Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur
2012-01-01
This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. To this end, the study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956
An embedded vision system for an unmanned four-rotor helicopter
NASA Astrophysics Data System (ADS)
Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James
2006-10-01
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling (SAIL) daughter board, attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has proven to give very good results in its performance of a number of real-time robotic vision algorithms.
A robust embedded vision system feasible white balance algorithm
NASA Astrophysics Data System (ADS)
Wang, Yuan; Yu, Feihong
2018-01-01
White balance is a very important part of the color image processing pipeline. In order to meet the efficiency and accuracy needs of embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. First, to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G, and B components of the raw data is used to initialize the subsequent iterative method. After that, a bilinear interpolation algorithm is used to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
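The abstract does not spell out the exact formulas, so the following is a minimal sketch of the general scheme it describes: gains initialized from raw R, G, B statistics (a gray-world assumption), then refined iteratively with a step that adapts to the remaining channel imbalance. All numeric values (step size, iteration count) are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def iterative_white_balance(img, iters=10, step=0.5):
    """Gray-world white balance with an adaptive step: a sketch of the
    scheme outlined above (initial statistics + iterative refinement)."""
    out = img.astype(np.float64)
    # Initialization from raw channel statistics (gray-world assumption).
    r, g, b = (out[..., c].mean() for c in range(3))
    out[..., 0] *= g / r
    out[..., 2] *= g / b
    for _ in range(iters):
        r, g, b = (out[..., c].mean() for c in range(3))
        # Step size shrinks automatically as the channel imbalance vanishes.
        out[..., 0] *= 1.0 + step * (g - r) / g
        out[..., 2] *= 1.0 + step * (g - b) / g
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
scene = rng.integers(40, 200, (64, 64, 3)).astype(np.float64)
scene[..., 0] *= 1.3                       # simulate a reddish color cast
balanced = iterative_white_balance(scene)
print(balanced.mean(axis=(0, 1)))          # channel means end up nearly equal
```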
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
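The recursive equations referred to here are the standard ones: a running row sum s(x, y) = s(x-1, y) + i(x, y) and ii(x, y) = ii(x, y-1) + s(x, y). The sketch below shows this serial recursion and the constant-time rectangle sum that motivates integral images in SURF-like detectors; the paper's row-parallel hardware decomposition itself is not reproduced.

```python
import numpy as np

def integral_image(img):
    """Serial recursion: s = running row sum, ii = column-accumulated s.
    Equivalent to np.cumsum along both axes."""
    h, w = img.shape
    s = np.zeros((h, w), dtype=np.int64)   # running row sums
    ii = np.zeros((h, w), dtype=np.int64)  # integral image
    for y in range(h):
        for x in range(w):
            s[y, x] = (s[y, x - 1] if x > 0 else 0) + img[y, x]
            ii[y, x] = (ii[y - 1, x] if y > 0 else 0) + s[y, x]
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum over an inclusive rectangle with four lookups, independent of size."""
    total = ii[bottom, right]
    if top > 0:   total -= ii[top - 1, right]
    if left > 0:  total -= ii[bottom, left - 1]
    if top > 0 and left > 0: total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()  # constant-time box filter
```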
Development of embedded real-time and high-speed vision platform
NASA Astrophysics Data System (ADS)
Ouyang, Zhenxing; Dong, Yimin; Yang, Hua
2015-12-01
Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, traditional high-speed vision platforms depend on a personal computer (PC) for human-computer interaction, whose large size makes it unsuitable for compact systems. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which is able to work entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed for implementing image-parallel algorithms in the FPGA and image-sequential algorithms in the DSP. Hence, ER-HVP Vision, with a size of 320 mm x 250 mm x 87 mm, offers these capabilities in a more compact form. Experimental results are also given to indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels is feasible on this newly developed vision platform.
Computer vision camera with embedded FPGA processing
NASA Astrophysics Data System (ADS)
Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel
2000-03-01
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
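As an illustration of the camera's demonstration algorithm, here is a minimal software sketch of Laplacian-of-Gaussian edge detection: sample a LoG kernel, convolve, and look for zero-crossings in the response. The kernel size and sigma are arbitrary choices, not values from the paper.

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Sampled Laplacian-of-Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()  # zero-sum so flat regions give no response

def convolve2d(img, k):
    """Plain 'valid' convolution; in the camera this is the FPGA datapath."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * k).sum()
    return out

img = np.zeros((32, 32)); img[:, 16:] = 255.0   # vertical step edge
resp = convolve2d(img, log_kernel())
# Edges sit at zero-crossings of the LoG response; the strongest swing
# of row 12 lands next to the step we placed at column 16.
print(np.abs(resp[12]).argmax())
```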
Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting
NASA Astrophysics Data System (ADS)
Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing
2016-03-01
Cell cutting is a significant task in biology studies, but high-throughput non-embedded cell cutting is still a big challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasion, benefiting from the highly precise nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting under the cell's natural conditions, which is expected to make a significant impact on biology studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction, and low-invasive cell surgery.
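A minimal sketch of a distance-regulated speed adapting strategy of the kind described: the approach speed is high while the knife is far from the cell and shrinks proportionally inside a slow-down radius. All thresholds and units here are invented for illustration.

```python
def adapt_speed(distance_um, v_max=50.0, v_min=0.5, slow_radius_um=20.0):
    """Distance-regulated speed: move fast while far from the target cell,
    slow down proportionally inside the approach radius (values made up)."""
    if distance_um >= slow_radius_um:
        return v_max
    return max(v_min, v_max * distance_um / slow_radius_um)

for d in (100.0, 15.0, 1.0):
    print(d, adapt_speed(d))  # speed shrinks as the knife nears the cell
```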
Embedded System Implementation on FPGA System With μCLinux OS
NASA Astrophysics Data System (ADS)
Fairuz Muhd Amin, Ahmad; Aris, Ishak; Syamsul Azmir Raja Abdullah, Raja; Kalos Zakiah Sahbudin, Ratna
2011-02-01
Embedded systems are taking on more complicated tasks as the processors involved become more powerful. They have been widely used in many areas such as industry, automotive systems, medical imaging, communications, speech recognition, and computer vision. The complexity requirements in hardware and software nowadays call for a flexible system that allows further enhancement of any design without adding new hardware; otherwise, any change in the system design requires the processor itself to be changed. To overcome this problem, a System On Programmable Chip (SOPC) has been designed and developed using a Field Programmable Gate Array (FPGA). A softcore processor, the NIOS II 32-bit RISC microprocessor core, was utilized in the FPGA system together with the embedded operating system (OS) μClinux. In this paper, an example of a web server is explained and demonstrated.
Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera
NASA Astrophysics Data System (ADS)
Dziri, Aziz; Duranton, Marc; Chapuis, Roland
2016-07-01
Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
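The paper's pipeline is not specified step-by-step in this abstract; the sketch below shows one common building block of such trackers, greedy IoU-based association of detections to existing tracks, as a software approximation of what a per-frame tracking loop on a Raspberry Pi might run.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def associate(tracks, detections, thresh=0.3):
    """Greedy IoU association of current detections to existing tracks."""
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_iou = None, thresh
        for di, d in enumerate(detections):
            if di not in used and iou(t, d) > best_iou:
                best, best_iou = di, iou(t, d)
        if best is not None:
            pairs.append((ti, best)); used.add(best)
    return pairs  # unmatched detections spawn new tracks; unmatched tracks age out

tracks = [(10, 10, 50, 50), (100, 100, 140, 140)]
dets = [(104, 98, 143, 141), (12, 11, 52, 49)]
print(associate(tracks, dets))  # [(0, 1), (1, 0)]
```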
Design and control of an embedded vision guided robotic fish with multiple control surfaces.
Yu, Junzhi; Wang, Kai; Tan, Min; Zhang, Jianwei
2014-01-01
This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability of the body plus caudal fin and the complementary maneuverability of the accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator (CPG) based control method is employed. Meanwhile, monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. In particular, a pelvic fin-actuated sideward swimming gait was implemented for the first time. It was also found that the speed and maneuverability of the robotic fish with coordinated control surfaces were largely superior to those of a swimming robot propelled by a single control surface.
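As a rough illustration of CPG-based gait generation, the sketch below drives three caudal joints with phase-lagged rhythmic signals so that a traveling body wave emerges; a full CPG couples the oscillators and feeds sensory signals back into frequency and amplitude, as the abstract describes. All gains here are made-up values, not the paper's tuned parameters.

```python
import math

def cpg_wave(t, freq=1.0, amp=30.0, phase=0.0, offset=0.0):
    """One CPG output channel: a rhythmic joint-angle command (degrees).
    Parameter values are illustrative, not the robot's tuned gains."""
    return offset + amp * math.sin(2 * math.pi * freq * t + phase)

# Caudal joints driven with a fixed phase lag produce a traveling body wave;
# in the real controller, vision feedback would modulate freq/amp online.
for t in (0.0, 0.25, 0.5):
    print([round(cpg_wave(t, phase=-i * math.pi / 3), 1) for i in range(3)])
```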
Always-on low-power optical system for skin-based touchless machine control.
Lecca, Michela; Gottardi, Massimo; Farella, Elisabetta; Milosevic, Bojan
2016-06-01
Embedded vision systems are smart, energy-efficient devices that capture and process a visual signal in order to extract high-level information about the surrounding observed world. Thanks to these capabilities, embedded vision systems attract more and more interest from research and industry. In this work, we present a novel low-power optical embedded system tailored to detect human skin under various illumination conditions. We employ the presented sensor as a smart switch to activate one or more appliances connected to it. The system is composed of an always-on low-power RGB color sensor, a proximity sensor, and an energy-efficient microcontroller (MCU). The architecture of the color sensor allows hardware preprocessing of the RGB signal, which is converted into the rg space directly on chip, reducing power consumption. The rg signal is delivered to the MCU, where it is classified as skin or non-skin. Each time the signal is classified as skin, the proximity sensor is activated to check the distance of the detected object. If it appears to be in the desired proximity range, the system detects the interaction and switches the connected appliances on or off. The experimental validation of the proposed system on a prototype shows that processing both distance and color remarkably improves on the performance of the two separate components. This makes the system a promising tool for energy-efficient, touchless control of machines.
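A minimal sketch of the decision logic described: classify the on-chip rg chromaticity as skin or non-skin, then gate the switch on the proximity reading. The rg bounds and distance threshold below are rough illustrative values, not the ones calibrated for the actual sensor.

```python
def to_rg(r, g, b):
    """Normalized rg chromaticity (done on-chip in the described sensor)."""
    s = float(r + g + b) or 1.0
    return r / s, g / s

def is_skin(r, g, b):
    """Box classifier in rg space; the bounds are rough literature-style
    values, assumed for illustration, not the device's trained ones."""
    rn, gn = to_rg(r, g, b)
    return 0.36 <= rn <= 0.46 and 0.28 <= gn <= 0.36

def touchless_switch(rgb, distance_cm, max_cm=10.0):
    """Fire only when color says 'skin' AND the proximity sensor agrees."""
    return is_skin(*rgb) and distance_cm <= max_cm

print(touchless_switch((170, 120, 90), 5.0))   # True: skin tone, in range
print(touchless_switch((170, 120, 90), 40.0))  # False: too far away
print(touchless_switch((90, 120, 170), 5.0))   # False: not skin-colored
```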
Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi
2012-10-22
This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
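A sketch of the 3D LUT idea under stated assumptions: the RGB cube is quantized into bins, each bin is pre-classified offline by some color model (a crude "reddish" rule stands in for the paper's linear color models and histograms), and runtime classification is a single table read per pixel, cheap enough for a Cortex-M4 at frame rate.

```python
import numpy as np

def build_lut(model, bins=32):
    """Quantized 3D RGB look-up table: 1 = 'fruit color', 0 = background.
    `model` maps a bin-center RGB triple to 0/1; here a crude red rule
    stands in for the paper's linear color models / fruit histograms."""
    lut = np.zeros((bins, bins, bins), dtype=np.uint8)
    step = 256 // bins
    for ri in range(bins):
        for gi in range(bins):
            for bi in range(bins):
                c = (ri * step + step // 2, gi * step + step // 2,
                     bi * step + step // 2)
                lut[ri, gi, bi] = model(c)
    return lut

def classify(img, lut, bins=32):
    """Per-pixel classification is a single indexed read into the LUT."""
    idx = (img // (256 // bins)).astype(np.int32)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

reddish = lambda c: int(c[0] > 120 and c[0] > 1.5 * c[1] and c[0] > 1.5 * c[2])
lut = build_lut(reddish)
img = np.array([[[200, 60, 50], [60, 120, 70]]], dtype=np.uint8)
print(classify(img, lut))  # [[1 0]]: red-peach-like pixel vs. foliage-like pixel
```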
Embedding Cognitive Systems into Systems Engineering Practice
2008-12-01
…Brahms, developed out of NASA Ames, is free for research… An Analysis of Alternatives consists of eight steps: 1. Determine… said to look like a Star Trek™ control panel. Fields dynamically resize when the user clicks on them. This is helpful for those with vision degradation.
Evaluation of 5 different labeled polymer immunohistochemical detection systems.
Skaland, Ivar; Nordhus, Marit; Gudlaugsson, Einar; Klos, Jan; Kjellevold, Kjell H; Janssen, Emiel A M; Baak, Jan P A
2010-01-01
Immunohistochemical staining is important for diagnosis and therapeutic decision making but the results may vary when different detection systems are used. To analyze this, 5 different labeled polymer immunohistochemical detection systems, REAL EnVision, EnVision Flex, EnVision Flex+ (Dako, Glostrup, Denmark), NovoLink (Novocastra Laboratories Ltd, Newcastle Upon Tyne, UK) and UltraVision ONE (Thermo Fisher Scientific, Fremont, CA) were tested using 12 different, widely used mouse and rabbit primary antibodies, detecting nuclear, cytoplasmic, and membrane antigens. Serial sections of multitissue blocks containing 4% formaldehyde fixed paraffin embedded material were selected for their weak, moderate, and strong staining for each antibody. Specificity and sensitivity were evaluated by subjective scoring and digital image analysis. At optimal primary antibody dilution, digital image analysis showed that EnVision Flex+ was the most sensitive system (P < 0.005), with means of 8.3, 13.4, 20.2, and 41.8 gray scale values stronger staining than REAL EnVision, EnVision Flex, NovoLink, and UltraVision ONE, respectively. NovoLink was the second most sensitive system for mouse antibodies, but showed low sensitivity for rabbit antibodies. Due to low sensitivity, 2 cases with UltraVision ONE and 1 case with NovoLink stained false negatively. None of the detection systems showed any distinct false positivity, but UltraVision ONE and NovoLink consistently showed weak background staining both in negative controls and at optimal primary antibody dilution. We conclude that there are significant differences in sensitivity, specificity, costs, and total assay time in the immunohistochemical detection systems currently in use.
Uranus: a rapid prototyping tool for FPGA embedded computer vision
NASA Astrophysics Data System (ADS)
Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.
2007-01-01
The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded software operators and provides the necessary support to read and display image sequences as well as video files. The user can employ the previously compiled soft-operators in a high-level processing chain and code his own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.
NASA Astrophysics Data System (ADS)
Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.
2006-10-01
The Virtex-II Pro FPGA is applied to the vision sensor tracking system of the IRB2400 robot. The hardware platform, which undertakes the tasks of improving SNR and compressing data, is built around the high-speed image processing capability of the FPGA. The lower-level image-processing algorithms are realized by combining the FPGA fabric and the embedded CPU, and the speed of image processing is accelerated by introducing both. The use of the embedded CPU also makes it easy to realize the logic design of the interfaces. Some key techniques are presented in the text, such as the read-write process, template matching, and convolution, and some modules are simulated as well. Finally, a comparison is carried out among implementations of these modules using this design, a PC, and a DSP. Because the core of the high-speed image processing system is an FPGA chip, whose function can be conveniently updated, the measurement system is, to a degree, intelligent.
Bio-inspired approach for intelligent unattended ground sensors
NASA Astrophysics Data System (ADS)
Hueber, Nicolas; Raymond, Pierre; Hennequin, Christophe; Pichler, Alexander; Perrot, Maxime; Voisin, Philippe; Moeglin, Jean-Pierre
2015-05-01
Improving surveillance capacity over wide zones requires a set of smart, battery-powered Unattended Ground Sensors capable of issuing an alarm to a decision-making center. Only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis, the peripheral vision, and a high-level event-focused analysis, the foveal vision. By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast detector of relevant events. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify a huge amount of heterogeneous data in real time thanks to its natively parallel hardware structure. This UGS prototype validates our system approach under laboratory tests. The peripheral analysis module demonstrates a low false alarm rate, whereas the foveal vision correctly focuses on the detected events. A parallel FPGA implementation of the recognition core succeeds in fulfilling the embedded application requirements. These results pave the way for future reconfigurable virtual field agents. By locally processing the data and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded Artificial Intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
NASA Astrophysics Data System (ADS)
Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan
2018-01-01
Depth measurement is the most basic measurement in various machine vision applications, such as automatic driving, unmanned aerial vehicles (UAVs), and robots, and it has a wide range of uses. With the development of image processing technology and improvements in hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 processor and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms for dual-camera calibration, image matching, and depth calculation have been studied and implemented on the hardware platform, and the hardware design and the soundness of the related algorithms are tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can reach 25 fps. The experimental results show that the optimal measurement range of the system is from 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11, and DMCU hardware platforms, the embedded AM5728 hardware is well suited to meeting real-time depth measurement requirements while maintaining image resolution.
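Once calibration and matching are done, the depth calculation step reduces to the pinhole stereo relation Z = f·B/d. The focal length and baseline below are assumed values, chosen so that the example disparities map onto the system's reported 0.5-1.5 m optimal range.

```python
def depth_from_disparity(d_px, focal_px, baseline_m):
    """Pinhole stereo geometry: Z = f * B / d. The calibration values used
    below are assumptions, not the AM5728 rig's actual parameters."""
    return focal_px * baseline_m / d_px

f, B = 800.0, 0.06            # 800 px focal length, 6 cm baseline (assumed)
for d in (96, 48, 32):        # disparity in pixels
    print(d, round(depth_from_disparity(d, f, B), 2), "m")
# -> 0.5 m, 1.0 m, 1.5 m: spanning the system's reported optimal range
```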
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In the conventional tool positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and image processing technique for motion measurement of a lathe tool from two-dimensional sequential images, captured using a charge coupled device camera having a resolution of 250 microns, is described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to the machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.
Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres
2016-05-28
Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.
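The paper's linked-list labeling is FPGA-specific; as a software stand-in, the sketch below does one-scan connected-component labeling with a union-find table, which captures the same single-pass, low-memory idea.

```python
def label_blobs(img):
    """One-scan connected-component labeling (4-connectivity) with a
    union-find table: a software stand-in for the paper's linked-list
    labeling on FPGA."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # parent[l] = representative of provisional label l
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            left = labels[y][x - 1] if x > 0 else 0
            up = labels[y - 1][x] if y > 0 else 0
            if left and up:
                labels[y][x] = find(left)
                parent[find(up)] = find(left)   # merge the two runs
            elif left or up:
                labels[y][x] = find(left or up)
            else:
                parent.append(next_label)       # new provisional blob
                labels[y][x] = next_label
                next_label += 1
    roots = {find(labels[y][x]) for y in range(h) for x in range(w) if img[y][x]}
    return len(roots)

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 1, 1]]
print(label_blobs(img))  # 3 blobs
```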
How the Air Force Should Stay Engaged in Computer Vision Technology Development
2007-04-01
Vosse, Bettine A H; Seelentag, Walter; Bachmann, Astrid; Bosman, Fred T; Yan, Pu
2007-03-01
The aim of this study was to evaluate specific immunostaining and background staining in formalin-fixed, paraffin-embedded human tissues with the 2 most frequently used immunohistochemical detection systems, Avidin-Biotin-Peroxidase (ABC) and EnVision+. A series of fixed tissues, including breast, colon, kidney, larynx, liver, lung, ovary, pancreas, prostate, stomach, and tonsil, was used in the study. Three monoclonal antibodies, 1 against a nuclear antigen (Ki-67), 1 against a cytoplasmic antigen (cytokeratin), and 1 against a cytoplasmic and membrane-associated antigen and a polyclonal antibody against a nuclear and cytoplasmic antigen (S-100) were selected for these studies. When the ABC system was applied, immunostaining was performed with and without blocking of endogenous avidin-binding activity. The intensity of specific immunostaining and the percentage of stained cells were comparable for the 2 detection systems. The use of ABC caused widespread cytoplasmic and rare nuclear background staining in a variety of normal and tumor cells. A very strong background staining was observed in colon, gastric mucosa, liver, and kidney. Blocking avidin-binding capacity reduced background staining, but complete blocking was difficult to attain. With the EnVision+ system no background staining occurred. Given the efficiency of the detection, equal for both systems or higher with EnVision+, and the significant background problem with ABC, we advocate the routine use of the EnVision+ system.
Handheld pose tracking using vision-inertial sensors with occlusion handling
NASA Astrophysics Data System (ADS)
Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried
2016-07-01
Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust to illumination changes. Three data fusion methods have been proposed, including a triangulation-based stereo-vision system, a constraint-based stereo-vision system with occlusion handling, and a triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy assessment of the proposed system is carried out by comparison with data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system has achieved high accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.
NASA Astrophysics Data System (ADS)
Jridi, Maher; Alfalou, Ayman
2017-05-01
The major goal of this paper is to investigate the Multi-CPU/FPGA SoC (System on Chip) design flow and to transfer know-how and skills for rapidly designing embedded real-time vision systems. Our aim is to show how the use of these devices can benefit system-level integration, since they make simultaneous hardware and software development possible. We take facial detection and pretreatments as a case study since they have great potential to be used in several applications such as video surveillance, building access control, and criminal identification. The designed system uses the Xilinx Zedboard platform, which is the central element of the developed vision system. Video acquisition is performed using either a standard webcam connected to the Zedboard via the USB interface or several IP camera devices. Visualization of video content and intermediate results is possible with an HDMI interface connected to an HD display. The treatments embedded in the system are as follows: (i) pre-processing such as edge detection, implemented both in the ARM and in the reconfigurable logic, (ii) software implementation of motion detection and face detection using either Viola-Jones or LBP (Local Binary Pattern), and (iii) an application layer to select the processing application and to display results in a web page. One uniquely interesting feature of the proposed system is that two functions have been developed to transmit data from and to the VDMA port. With the proposed optimization, the hardware implementation of the Sobel filter takes 27 ms and 76 ms for 640x480 and 720p resolutions, respectively. Hence, the FPGA implementation yields an acceleration of 5 times, allowing the processing of 37 fps and 13 fps for 640x480 and 720p resolutions, respectively.
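For reference, the Sobel filter whose timings are quoted computes two 3x3 gradient convolutions per pixel and combines them into a magnitude; the sketch below is a plain software version of that datapath, not the Zedboard implementation.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
KY = KX.T                                            # vertical gradient

def sobel(img):
    """Sobel gradient magnitude; the inner products over each 3x3 window
    are what the Zedboard design moves into the programmable logic."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            win = img[y:y + 3, x:x + 3]
            gx, gy = (win * KX).sum(), (win * KY).sum()
            out[y, x] = np.hypot(gx, gy)
    return out

img = np.zeros((6, 6)); img[:, 3:] = 100.0
print(sobel(img).astype(int))  # strong response along the vertical edge
```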
A FPGA-based architecture for real-time image matching
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo
2013-10-01
Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images of the same scene taken from different viewpoints or at different times. However, its large computational complexity has been a challenge for most embedded systems. This paper proposes a single-FPGA-based image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction, and BRIEF matching. It optimizes the FPGA architecture for SIFT feature detection to reduce FPGA resource utilization. Moreover, we implement BRIEF description and matching on the FPGA as well. The proposed system can perform image matching at 30 fps (frames per second) for 1280x720 images. Its processing speed can meet the demand of most real-life computer vision applications.
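A minimal sketch of the BRIEF part of the pipeline: a fixed pattern of pixel-pair comparisons produces a 256-bit binary descriptor, and matching is a Hamming distance (XOR plus popcount), which is why it maps well to FPGA logic. The sampling pattern and patch positions below are illustrative; in the real system the descriptors are computed at SIFT keypoints.

```python
import numpy as np

rng = np.random.default_rng(42)
PAIRS = rng.integers(-8, 9, size=(256, 4))  # fixed pattern (dy1, dx1, dy2, dx2)

def brief(img, y, x):
    """256-bit BRIEF descriptor: one intensity comparison per bit."""
    bits = [int(img[y + p[0], x + p[1]] < img[y + p[2], x + p[3]]) for p in PAIRS]
    return np.array(bits, dtype=np.uint8)

def hamming(d1, d2):
    """Matching cost is a popcount of the XOR - ideal for FPGA logic."""
    return int(np.count_nonzero(d1 != d2))

img = rng.integers(0, 256, (64, 64))
d1 = brief(img, 20, 20)
d2 = brief(img, 20, 20)          # same patch -> distance 0
d3 = brief(img, 40, 40)          # different patch -> large distance
print(hamming(d1, d2), hamming(d1, d3))
```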
Optoelectronic instrumentation enhancement using data mining feedback for a 3D measurement system
NASA Astrophysics Data System (ADS)
Flores-Fuentes, Wendy; Sergiyenko, Oleg; Gonzalez-Navarro, Félix F.; Rivas-López, Moisés; Hernandez-Balbuena, Daniel; Rodríguez-Quiñonez, Julio C.; Tyrsa, Vera; Lindner, Lars
2016-12-01
3D measurement by a cyber-physical system based on optoelectronic scanning instrumentation has been enhanced by outlier-removal and regression data mining feedback. The prototype has applications in (1) industrial manufacturing systems that include robotic machinery, embedded vision, and motion control, (2) health care systems for measurement scanning, and (3) infrastructure, by providing structural health monitoring. This paper presents new research on the data processing of a 3D measurement vision sensing database. Outliers in the multivariate data have been detected and removed to improve the results of an artificial intelligence regression algorithm. Regression on physical measurement error data has been used to correct the 3D measurements. We conclude that joining physical phenomena, measurement, and computation is an effective approach to feedback loops in the control of industrial, medical, and civil tasks.
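The abstract's feedback loop, in miniature and under assumptions: fit a linear error model to (distance, error) samples, drop residual outliers, refit, and use the model to compensate new measurements. The paper's actual data mining is richer than this sketch, and all numbers below are synthetic.

```python
import numpy as np

def remove_outliers(x, y, z_thresh=2.5):
    """Drop samples whose regression residual is a z-score outlier."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    keep = np.abs(resid - resid.mean()) < z_thresh * resid.std()
    return x[keep], y[keep]

# y = measurement error vs. scanned distance; a linear error model fitted
# after outlier removal is fed back to correct new 3D measurements.
rng = np.random.default_rng(3)
x = np.linspace(1, 10, 50)
y = 0.02 * x + 0.01 + rng.normal(0, 0.005, 50)
y[7] += 0.2                                  # one gross outlier
xc, yc = remove_outliers(x, y)
slope, intercept = np.polyfit(xc, yc, 1)
corrected = lambda raw: raw - (slope * raw + intercept)  # error compensation
print(len(x) - len(xc), round(slope, 3))     # 1 outlier removed, slope ~0.02
```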
Machine Vision Within The Framework Of Collective Neural Assemblies
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1990-03-01
The proposed mechanism for designing a robust machine vision system is based on the dynamic activity generated by the various neural populations embedded in nervous tissue. It is postulated that a hierarchy of anatomically distinct tissue regions is involved in visual sensory information processing. Each region may be represented as a planar sheet of densely interconnected neural circuits. Spatially localized aggregates of these circuits represent collective neural assemblies. Four dynamically coupled neural populations are assumed to exist within each assembly. In this paper we present a state-variable model for a tissue sheet derived from empirical studies of population dynamics. Each population is modelled as a nonlinear second-order system. It is possible to emulate certain observed physiological and psychophysiological phenomena of biological vision by properly programming the interconnective gains. Important early visual phenomena such as temporal and spatial noise insensitivity, contrast sensitivity, and edge enhancement will be discussed for a one-dimensional tissue model.
FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision
Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe
2011-01-01
Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalians, which would demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069
Yang, Fan; Paindavoine, M
2003-01-01
This paper describes a real-time vision system that allows us to localize faces in video sequences and verify their identity. These processes are image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for a face recognition application using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare its performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and the digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of the hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for images of size 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.
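A minimal sketch of the RBF classifier family used here: the score for a probe feature vector is a weighted sum of Gaussian kernels centered on stored prototypes. The 2-D "features", centers, weights, and sigma below are toy values, not the paper's trained network.

```python
import numpy as np

def rbf_score(x, centers, weights, sigma=1.0):
    """RBF network output: weighted sum of Gaussian kernels around stored
    prototypes - the classifier family the face tracker/verifier uses."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    return float(weights @ np.exp(-d2 / (2 * sigma ** 2)))

# Toy 2-D 'feature vectors': prototypes of the enrolled face.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
weights = np.array([1.0, 1.0])
print(rbf_score(np.array([0.1, 0.0]), centers, weights))  # high: near a prototype
print(rbf_score(np.array([4.0, 4.0]), centers, weights))  # ~0: reject
```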
Transition in Gas Turbine Control System Architecture: Modular, Distributed, and Embedded
NASA Technical Reports Server (NTRS)
Culley, Dennis
2010-01-01
Control systems are an increasingly important component of turbine-engine system technology. However, as engines become more capable, the control system itself becomes ever more constrained by the inherent environmental conditions of the engine, a relationship forced by the continued reliance on commercial electronics technology. A revolutionary change in the architecture of turbine-engine control systems will change this paradigm and result in fully distributed engine control systems. Initially, the revolution will begin with the physical decoupling of the control law processor from the hostile engine environment, using a digital communications network and engine-mounted high-temperature electronics requiring little or no thermal control. The vision for the evolution of distributed control capability from this initial implementation to fully distributed and embedded control is described in a roadmap and implementation plan. The development of this plan is the result of discussions with government and industry stakeholders.
Synthetic vision in the cockpit: 3D systems for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth
2001-08-01
Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or outpacing Moore's Law in the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view onto visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on board and into the cockpit visually via the 3D display, for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.
A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
1997-01-01
A low-power high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design that is composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in a SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second of computing power, which is a two-order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.
Low-Latency Embedded Vision Processor (LLEVS)
2016-03-01
Low Vision: Assessment and Training for Mobility.
ERIC Educational Resources Information Center
Dodds, Allan G.; Davis, Denis P.
1987-01-01
To develop a battery of tasks to predict and improve mobility performance, a series of functional vision tasks (texural shearing, degraded images, embedded figures, and parafoveal attention) were generated by a microcomputer. Sixty visually impaired subjects given either computerized task training or real-life training improved their low vision…
Real-time depth processing for embedded platforms
NASA Astrophysics Data System (ADS)
Rahnama, Oscar; Makarov, Aleksej; Torr, Philip
2017-05-01
Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (i.e. LiDAR, Infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications which are often constrained by power consumption, obtaining accurate results in real-time is a challenge. We demonstrate a computationally and memory efficient implementation of a stereo block-matching algorithm in FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 Watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster scan readout of modern digital image sensors.
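A software sketch of the block-matching core: for each pixel, slide over candidate disparities along the same row, score each with the sum of absolute differences (SAD), and keep the winner. The FPGA version streams this computation to avoid memory bottlenecks; the block size and disparity range below are arbitrary choices.

```python
import numpy as np

def disparity_sad(left, right, block=5, max_disp=16):
    """Winner-takes-all SAD block matching along the epipolar (same) row.
    A software sketch of the datapath; no sub-pixel refinement."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

rng = np.random.default_rng(1)
right = rng.integers(0, 256, (32, 64)).astype(np.int32)
left = np.roll(right, 4, axis=1)           # left image shifted by 4 px
print(disparity_sad(left, right)[16, 40])  # -> 4
```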
Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.
van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline
2010-11-01
In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims for more distinguished and better informed visions of future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we believe we have designed an interesting response to a recent call for a less speculative ethics of technology, by encouraging discussions about the quality of positive and negative visions of the future of robotics.
A combined vision-inertial fusion approach for 6-DoF object pose estimation
NASA Astrophysics Data System (ADS)
Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.
2015-02-01
The estimation of the 3D position and orientation of moving objects ('pose' estimation) is a critical process for many applications in robotics, computer vision, and mobile services. Although major research efforts have been carried out to design accurate, fast, and robust indoor pose estimation systems, it remains an open challenge to provide a low-cost, easy-to-deploy, and reliable solution. Addressing this issue, this paper describes a hybrid approach for six-degrees-of-freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single-technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located so that the object to be tracked is visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation, and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy while satisfactorily dealing with the real-time constraints.
Design and Implementation of Embedded Computer Vision Systems Based on Particle Filters
2010-01-01
…methodology for hardware/software implementation of multi-dimensional particle filter applications, and we explore this in the third application, which is a 3D… and hence multiprocessor implementation of particle filters is an important option to examine. A significant body of work exists on optimizing generic…
Framing ICT, Teachers and Learners in Australian School Education ICT Policy
ERIC Educational Resources Information Center
Jordan, Kathy
2011-01-01
It is well over 20 years since information and communication technologies (ICT) was first included as part of a future vision for Australia's schools. Since this time numerous national policies have been developed, which collectively articulate an official discourse in support of a vision for ICT to be embedded in our schools, and routinely used…
Azzopardi, George; Petkov, Nicolai
2014-01-01
The remarkable abilities of the primate visual system have inspired the construction of computational models of some visual neurons. We propose a trainable hierarchical object recognition model, which we call S-COSFIRE (S stands for Shape and COSFIRE stands for Combination Of Shifted FIlter REsponses), and use it to localize and recognize objects of interest embedded in complex scenes. It is inspired by the visual processing in the ventral stream (V1/V2 → V4 → TEO). Recognition and localization of objects embedded in complex scenes is important for many computer vision applications. Most existing methods require prior segmentation of the objects from the background, which in turn requires recognition. An S-COSFIRE filter is automatically configured to be selective for an arrangement of contour-based features that belong to a prototype shape specified by an example. The configuration comprises selecting relevant vertex detectors and determining certain blur and shift parameters. The response is computed as the weighted geometric mean of the blurred and shifted responses of the selected vertex detectors. S-COSFIRE filters share similar properties with some neurons in inferotemporal cortex, which provided inspiration for this work. We demonstrate the effectiveness of S-COSFIRE filters in two applications: letter and keyword spotting in handwritten manuscripts and object spotting in complex scenes for the computer vision system of a domestic robot. S-COSFIRE filters are effective for recognizing and localizing (deformable) objects in images of complex scenes without requiring prior segmentation. They are versatile trainable shape detectors, conceptually simple and easy to implement. The presented hierarchical shape representation contributes to a better understanding of the brain and to more robust computer vision algorithms. PMID:25126068
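The response rule is stated explicitly in the abstract: a weighted geometric mean of the blurred and shifted part-detector responses. A direct transcription follows; note how a single missing part (a zero response) suppresses the whole shape response, unlike a weighted sum.

```python
import numpy as np

def s_cosfire_response(responses, weights):
    """Weighted geometric mean of the (blurred, shifted) part-detector
    responses: all parts must respond for the whole shape to respond,
    since any zero factor kills the product."""
    responses = np.asarray(responses, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    return float(np.prod(responses ** (weights / weights.sum())))

print(s_cosfire_response([0.9, 0.8, 0.85], [1, 1, 1]))  # ~0.85: shape present
print(s_cosfire_response([0.9, 0.0, 0.85], [1, 1, 1]))  # 0.0: a part is missing
```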
Strategic analytics: towards fully embedding evidence in healthcare decision-making.
Garay, Jason; Cartagena, Rosario; Esensoy, Ali Vahit; Handa, Kiren; Kane, Eli; Kaw, Neal; Sadat, Somayeh
2015-01-01
Cancer Care Ontario (CCO) has implemented multiple information technology solutions and collected health-system data to support its programs. There is now an opportunity to leverage these data and perform advanced end-to-end analytics that inform decisions around improving health-system performance. In 2014, CCO engaged in an extensive assessment of its current data capacity and capability, with the intent to drive increased use of data for evidence-based decision-making. The breadth and volume of data at CCO uniquely place the organization to contribute not only to system-wide operational reporting, but also to more advanced modelling of current and future state system management and planning. In 2012, CCO established a strategic analytics practice to help the agency's programs contextualize and inform key business decisions and to provide support through innovative predictive analytics solutions. This paper describes the organizational structure, services, and supporting operations that have enabled progress to date, and discusses the next steps towards the vision of embedding evidence fully into healthcare decision-making. Copyright © 2014 Longwoods Publishing.
Evolution of Embedded Processing for Wide Area Surveillance
2014-01-01
Subject terms: embedded processing; high-performance computing; general-purpose graphical processing units (GPGPUs). ...intelligence, surveillance, and reconnaissance (ISR) mission capabilities. The capabilities these advancements are achieving include the ability to provide persistent all... fighters to support and positively affect their mission. Significant improvements in high-performance computing (HPC) technology make it possible to...
System of fabricating a flexible electrode array
Krulevitch, Peter; Polla, Dennis L.; Maghribi, Mariam N.; Hamilton, Julie; Humayun, Mark S.; Weiland, James D.
2010-10-12
An image is captured or otherwise converted into a signal in an artificial vision system. The signal is transmitted to the retina utilizing an implant. The implant consists of a polymer substrate made of a compliant material such as poly(dimethylsiloxane) or PDMS. The polymer substrate is conformable to the shape of the retina. Electrodes and conductive leads are embedded in the polymer substrate. The conductive leads and the electrodes transmit the signal representing the image to the cells in the retina. The signal representing the image stimulates cells in the retina.
Embedded Systems and TensorFlow Frameworks as Assistive Technology Solutions.
Mulfari, Davide; Palla, Alessandro; Fanucci, Luca
2017-01-01
This paper presents the design of a deep-learning-based wearable computer vision system for visually impaired users. The Assistive Technology solution exploits a powerful single-board computer and smart glasses with a camera in order to allow the user to explore the objects in the surrounding environment, while it employs the Google TensorFlow machine learning framework to classify the acquired stills in real time. The proposed aid can thus increase awareness of the explored environment, and it interacts with its user by means of audio messages.
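A hedged sketch of the classification step using an off-the-shelf TensorFlow/Keras ImageNet model; the actual network, label set, and device glue code are not specified in the abstract, so MobileNetV2 and the file path are stand-ins.

```python
import numpy as np
import tensorflow as tf

# Pretrained ImageNet classifier standing in for the system's model.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify_still(image_path):
    """Classify one still captured by the smart-glasses camera."""
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x)
    # Top-3 labels, e.g. to be rendered as audio messages for the user.
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]

for label_id, name, score in classify_still("frame.jpg"):
    print(f"{name}: {score:.2f}")
```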
Intelligent manipulation technique for multi-branch robotic systems
NASA Technical Reports Server (NTRS)
Chen, Alexander Y. K.; Chen, Eugene Y. S.
1990-01-01
New analytical developments in kinematics planning are reported. The INtelligent KInematics Planner (INKIP) consists of the kinematics spline theory and an adaptive logic annealing process. A novel framework for a robot learning mechanism is also introduced. The FUzzy LOgic Self-Organized Neural Networks (FULOSONN) architecture integrates fuzzy logic for commands, control, searching, and reasoning; an embedded expert system for nominal robotics knowledge implementation; and self-organized neural networks for the dynamic knowledge evolutionary process. Progress on the mechanical construction of the SRA Advanced Robotic System (SRAARS) and the real-time robot vision system is also reported. A decision was made to incorporate Local Area Network (LAN) technology in the overall communication system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind
Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing to ... We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
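A small illustration, not the authors' symbolic method, of the kind of robustness probe described: run OpenCV's stock HOG people detector on a frame and on an imperceptibly perturbed copy, then flag any change in the detections. The file name and noise magnitude are assumptions.

```python
import cv2
import numpy as np

# OpenCV's stock HOG+SVM people detector, as analyzed in the paper.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return rects

frame = cv2.imread("frame.png")                 # hypothetical video frame
rng = np.random.default_rng(0)
# Perturbation imperceptible to the naked eye: small uniform pixel noise.
noise = rng.integers(-2, 3, size=frame.shape, dtype=np.int16)
perturbed = np.clip(frame.astype(np.int16) + noise, 0, 255).astype(np.uint8)

n_clean, n_pert = len(detect_people(frame)), len(detect_people(perturbed))
if n_clean != n_pert:
    print(f"robustness violation: {n_clean} vs {n_pert} detections")
```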
2015-12-04
from back-office big-data analytics to fieldable hot-spot systems providing storage-processing-communication services for off-grid sensors. Speed... and power efficiency are the key metrics. Current state-of-the-art approaches for big data aim toward scaling out to many computers to meet... pursued within Lincoln Laboratory as well as external sponsors. Our vision is to bring new capabilities in big-data and internet-of-things applications
Deniz, Oscar; Vallez, Noelia; Espinosa-Aranda, Jose L; Rico-Saavedra, Jose M; Parra-Patino, Javier; Bueno, Gloria; Moloney, David; Dehghani, Alireza; Dunne, Aubrey; Pagani, Alain; Krauss, Stephan; Reiser, Ruben; Waeny, Martin; Sorci, Matteo; Llewellynn, Tim; Fedorczak, Christian; Larmoire, Thierry; Herbst, Marco; Seirafi, Andre; Seirafi, Kasra
2017-05-21
Embedded systems control and monitor a great deal of our reality. While some "classic" features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Apart from the obvious necessity to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous "intelligence". Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling those challenges and opportunities.
A traffic situation analysis system
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Rosner, Marcin
2011-01-01
The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. For example, embedded vision systems built into vehicles can be used as early warning systems, or stationary camera systems can modify the switching frequency of signals at intersections. Today the automated analysis of traffic situations is still in its infancy: the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully understood by a vision system. We present steps towards such a traffic monitoring system, which is designed to detect potentially dangerous traffic situations, especially incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system is field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in an outdoor-capable housing. Two cameras run vehicle detection software including license plate detection and recognition; one camera runs a complex pedestrian detection and tracking module based on the HOG detection principle. As a supplement, all 3 cameras use additional optical flow computation on a low-resolution video stream in order to estimate the motion path and speed of objects. This work describes the foundation for all 3 object detection modalities (pedestrians, vehicles, license plates), and explains the system setup and its design.
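The low-resolution optical flow supplement could look roughly like the dense-flow speed estimate below; the Farneback parameters, motion threshold, and pixel-to-metre scale are assumptions rather than the deployed system's settings.

```python
import cv2
import numpy as np

def motion_speed(prev_gray, cur_gray, fps, metres_per_pixel):
    """Estimate object speed from dense optical flow on a low-resolution stream."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.linalg.norm(flow, axis=2)      # pixels per frame
    moving = magnitude[magnitude > 1.0]           # ignore static background
    if moving.size == 0:
        return 0.0
    return float(np.median(moving)) * fps * metres_per_pixel   # m/s
```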
EnVision+, a new dextran polymer-based signal enhancement technique for in situ hybridization (ISH).
Wiedorn, K H; Goldmann, T; Henne, C; Kühl, H; Vollmer, E
2001-09-01
Seventy paraffin-embedded cervical biopsy specimens and condylomata were tested for the presence of human papillomavirus (HPV) by conventional in situ hybridization (ISH) and ISH with subsequent signal amplification. Signal amplification was performed either by a commercial biotinyl-tyramide-based detection system [GenPoint (GP)] or by the novel two-layer dextran polymer visualization system EnVision+ (EV), in which both EV-horseradish peroxidase (EV-HRP) and EV-alkaline phosphatase (EV-AP) were applied. We could demonstrate for the first time, that EV in combination with preceding ISH results in a considerable increase in signal intensity and sensitivity without loss of specificity compared to conventional ISH. Compared to GP, EV revealed a somewhat lower sensitivity, as measured by determination of the integrated optical density (IOD) of the positively stained cells. However, EV is easier to perform, requires a shorter assay time, and does not raise the background problems that may be encountered with biotinyl-tyramide-based amplification systems. (J Histochem Cytochem 49:1067-1071, 2001)
Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.
Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena
2014-11-01
A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system is endowed with hardware- and software-synchronous image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object in different positions, orientations, and linear and angular speeds. The system is able to detect an immobile object's position and orientation with a maximum error of 0.5 mm and 1.6° across the whole depth of field, and to track an object moving at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was below 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance, breathing, and blood pressure at 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust system to measure brain shift and pulsatility, with accuracy superior to that of other reported systems.
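The frequency-domain step reduces to locating spectral peaks of the tracked motion signal inside known physiological bands; a minimal sketch, with the band edges chosen here as assumptions around the reported peak frequencies:

```python
import numpy as np

def dominant_rhythms(displacement, fs, bands):
    """Locate spectral peaks of a tracked cortical-surface motion signal.

    displacement: 1D motion signal of the features' centre of mass (mm)
    fs:           stereo acquisition rate (Hz)
    bands:        dict of (low, high) frequency windows to search
    """
    spectrum = np.abs(np.fft.rfft(displacement - displacement.mean()))
    freqs = np.fft.rfftfreq(len(displacement), d=1.0 / fs)
    peaks = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs <= hi)
        peaks[name] = freqs[mask][np.argmax(spectrum[mask])]
    return peaks

bands = {"sympathovagal": (0.02, 0.08), "breathing": (0.1, 0.4), "pulse": (0.7, 1.5)}
```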
NASA Astrophysics Data System (ADS)
Shao, Yanhua; Mei, Yanying; Chu, Hongyu; Chang, Zhiyuan; He, Yuxuan; Zhan, Huayi
2018-04-01
Pedestrian detection (PD) is an important application domain in computer vision and pattern recognition. Unmanned Aerial Vehicles (UAVs) have become a major field of research in recent years. In this paper, an algorithm for robust pedestrian detection based on the combination of the infrared HOG (IR-HOG) feature and an SVM is proposed for highly complex outdoor scenarios, on the basis of airborne IR image sequences from a UAV. The basic flow of our application is as follows. Firstly, the thermal infrared imager (TAU2-336), installed on our Outdoor Autonomous Searching (OAS) UAV, takes pictures of the designated outdoor area. Secondly, image sequence collection and processing are accomplished using a high-performance embedded system built around a Samsung ODROID-XU4 running Ubuntu, and IR-HOG features are extracted. Finally, the SVM is used to train the pedestrian classifier. Experiments show that our method gives promising results under complex conditions, including strong noise corruption and partial occlusion.
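A sketch of the HOG-plus-SVM training flow using generic building blocks (skimage's HOG and scikit-learn's linear SVM); the paper's exact IR-HOG descriptor layout, window size, and SVM settings are assumptions here.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def ir_hog(ir_patch):
    """HOG descriptor of a thermal image patch (cell/block layout assumed)."""
    return hog(ir_patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_pedestrian_classifier(patches, labels):
    """patches: 64x128 IR windows; labels: 1 = pedestrian, 0 = background."""
    features = np.stack([ir_hog(p) for p in patches])
    clf = LinearSVC(C=0.01)          # regularization strength is illustrative
    clf.fit(features, labels)
    return clf
```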
Design and implementation of a vision-based hovering and feature tracking algorithm for a quadrotor
NASA Astrophysics Data System (ADS)
Lee, Y. H.; Chahl, J. S.
2016-10-01
This paper demonstrates an approach to the vision-based control of unmanned quadrotors for hovering and object tracking. The algorithms used the Speeded-Up Robust Features (SURF) algorithm to detect objects. The pose of the object in the image was then calculated in order to pass the pose information to the flight controller. Finally, the flight controller steered the quadrotor to approach the object based on the calculated pose data. These processes were run using standard onboard resources found in the 3DR Solo quadrotor in an embedded computing environment. The obtained results showed that the algorithm behaved well during its missions, tracking and hovering, although there were significant latencies due to the low CPU performance of the onboard image processing system.
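The detect-describe-match step might be sketched as below. SURF itself is patented and lives in opencv-contrib (cv2.xfeatures2d.SURF_create); ORB is used here as a freely available stand-in with the same workflow, and the match thresholds are assumptions.

```python
import cv2
import numpy as np

detector = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def locate_object(template_gray, frame_gray, min_matches=10):
    """Return the tracked object's centroid in the frame, or None if lost."""
    kp1, des1 = detector.detectAndCompute(template_gray, None)
    kp2, des2 = detector.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
    if len(matches) < min_matches:
        return None                   # object lost: controller holds position
    pts = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts.mean(axis=0)           # 2D pose input for the flight controller
```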
Precise positioning method for multi-process connecting based on binocular vision
NASA Astrophysics Data System (ADS)
Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan
2016-01-01
With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors, and signal transmitters is increasingly urgent. Such parts, with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy, are generally fabricated by a combination of different manufacturing technologies. However, it is difficult to ensure machining precision because of the connection error between different processing methods. Therefore, a precise positioning method based on binocular micro stereo vision is proposed in this paper. Firstly, a novel and efficient camera calibration method for the stereoscopic microscope is presented to address the problems of narrow field of view, small depth of focus, and strong nonlinear distortion. Secondly, extraction algorithms for law curves and free curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereovision is set up and then embedded in a CNC machining experiment platform. Finally, a verification experiment on positioning accuracy is conducted; the experimental results indicate that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.
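Once the stereo rig is calibrated, recovering the 3D position of a matched feature is standard triangulation. A minimal sketch, assuming the two 3x4 projection matrices have already been estimated by the calibration step:

```python
import cv2
import numpy as np

def triangulate_feature(P_left, P_right, pt_left, pt_right):
    """Recover the 3D position of one matched feature point.

    P_left, P_right: 3x4 projection matrices from calibration (assumed known).
    pt_left/right:   corresponding 2D image points, in pixels.
    """
    pl = np.asarray(pt_left, dtype=np.float64).reshape(2, 1)
    pr = np.asarray(pt_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)   # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()                      # metric 3D point
```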
Using advanced computer vision algorithms on small mobile robots
NASA Astrophysics Data System (ADS)
Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.
2006-05-01
The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the AdaBoost algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an AdaBoost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension of this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real time. Test results are shown for a variety of environments.
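A minimal example of running an AdaBoost-trained cascade with OpenCV; the face cascade shipped with OpenCV stands in for the license-plate and soda-can cascades, which are not distributed with the paper, and the detection parameters are illustrative.

```python
import cv2

# Any AdaBoost-trained cascade XML works here; the stock frontal-face model
# is used only as a placeholder for the paper's custom cascades.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)        # some robustness to lighting changes
    # Each returned rectangle is one positive response of the boosted cascade.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                    minSize=(24, 24))
```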
Brewer, Margo
2016-09-01
Creating a vision (visioning) and sensemaking have been described as key leadership practices in the leadership literature. A vision provides clarity, motivation, and direction for staff, and is essential particularly in times of significant change. Closely related to visioning is sensemaking (the organisation of stimuli into a framework allowing people to understand, explain, attribute, extrapolate, and predict). The application of these strategies to leadership within the interprofessional field is yet to be scrutinised. This study examines an interprofessional capability framework as a visioning and sensemaking tool for use by leaders within a university health science curriculum. Interviews with 11 faculty members revealed that the framework had been embedded across multiple years and contexts within the curriculum. Furthermore, a range of responses to the framework were evoked in relation to its use to make sense of interprofessional practice and to provide a vision, guide, and focus for faculty. Overall the findings indicate that the framework can function as both a visioning and sensemaking tool.
Ouchi, M; Kinoshita, S
2015-01-01
Purpose: To evaluate the postoperative outcomes of cataract eyes complicated with coexisting ocular pathologies that underwent implantation of a refractive multifocal intraocular lens (MIOL) with a surface-embedded near section. Methods: LENTIS MPlus (Oculentis GmbH) refractive MIOLs were implanted in 15 eyes with ocular pathologies other than cataract (ie, six high-myopia eyes with an axial length longer than 28 mm, two fundus albipunctatus eyes, two branch retinal-vein occlusion eyes, four glaucoma eyes (one with high myopia), and two keratoconus eyes). Uncorrected or corrected distance and near visual acuity (VA) (UDVA, UNVA, CDVA, and CNVA), contrast sensitivity, and defocus curve were measured at 1 day and 6 months postoperatively, and each patient completed a 6-month postoperative questionnaire regarding vision quality and eyeglass use. Results: Thirteen eyes (87%) registered 0 or better in CDVA and 12 eyes (73%) registered better than 0 in CNVA. Contrast sensitivity in the eyes of all patients was comparable to that of normal healthy subjects. No patient required eyeglasses for distance vision, but three patients (20%) required them for near vision. No patient reported poor or very poor vision quality. Conclusion: With careful case selection, sectorial refractive MIOL implantation is effective for treating cataract eyes complicated with ocular pathologies. PMID:25744442
VISIONS - Vista Star Formation Atlas
NASA Astrophysics Data System (ADS)
Meingast, Stefan; Alves, J.; Boui, H.; Ascenso, J.
2017-06-01
In this talk I will present the new ESO public survey VISIONS. Starting in early 2017, we will use the ESO VISTA survey telescope in a 550-hour, multi-epoch programme to map the largest molecular cloud complexes within 500 pc. The survey is optimized for measuring the proper motions of young stellar objects invisible to Gaia and for mapping cloud structure with extinction. VISIONS will address a series of ISM topics, ranging from the connection of dense cores to YSOs and the dynamical evolution of embedded clusters to variations in the reddening law on both small and large scales.
Grossberg, Stephen
2015-09-24
This article provides an overview of neural models of synaptic learning and memory whose expression in adaptive behavior depends critically on the circuits and systems in which the synapses are embedded. It reviews Adaptive Resonance Theory, or ART, models that use excitatory matching and match-based learning to achieve fast category learning and whose learned memories are dynamically stabilized by top-down expectations, attentional focusing, and memory search. ART clarifies mechanistic relationships between consciousness, learning, expectation, attention, resonance, and synchrony. ART models are embedded in ARTSCAN architectures that unify processes of invariant object category learning, recognition, spatial and object attention, predictive remapping, and eye movement search, and that clarify how conscious object vision and recognition may fail during perceptual crowding and parietal neglect. The generality of learned categories depends upon a vigilance process that is regulated by acetylcholine via the nucleus basalis. Vigilance can get stuck at too high or too low values, thereby causing learning problems in autism and medial temporal amnesia. Similar synaptic learning laws support qualitatively different behaviors: Invariant object category learning in the inferotemporal cortex; learning of grid cells and place cells in the entorhinal and hippocampal cortices during spatial navigation; and learning of time cells in the entorhinal-hippocampal system during adaptively timed conditioning, including trace conditioning. Spatial and temporal processes through the medial and lateral entorhinal-hippocampal system seem to be carried out with homologous circuit designs. Variations of a shared laminar neocortical circuit design have modeled 3D vision, speech perception, and cognitive working memory and learning. A complementary kind of inhibitory matching and mismatch learning controls movement. This article is part of a Special Issue entitled SI: Brain and Memory. Copyright © 2014 Elsevier B.V. All rights reserved.
Fabrication of Advanced Thermoelectric Materials by Hierarchical Nanovoid Generation
NASA Technical Reports Server (NTRS)
Park, Yeonjoon (Inventor); Elliott, James R. (Inventor); Stoakley, Diane M. (Inventor); Chu, Sang-Hyon (Inventor); King, Glen C. (Inventor); Kim, Jae-Woo (Inventor); Choi, Sang Hyouk (Inventor); Lillehei, Peter T. (Inventor)
2011-01-01
A novel method to prepare an advanced thermoelectric material yields hierarchical structures embedded with nanometer-sized voids, which are key to enhancing the thermoelectric performance. A solution-based thin-film deposition technique enables preparation of a stable film of thermoelectric material and void generator (voigen). A subsequent thermal process creates a hierarchical nanovoid structure inside the thermoelectric material. Potential application areas of this advanced thermoelectric material with nanovoid structure are commercial applications (electronics cooling), medical and scientific applications (biological analysis devices, medical imaging systems), telecommunications, and defense and military applications (night vision equipment).
Terabytes to Megabytes: Data Reduction Onsite for Remote Limited Bandwidth Systems
NASA Astrophysics Data System (ADS)
Hirsch, M.
2016-12-01
Inexpensive, battery-powered embedded computer systems such as the Intel Edison and Raspberry Pi have inspired makers of all ages to create and deploy sensor systems. Geoscientists are also leveraging such inexpensive embedded computers for solar-powered and other low-resource systems for ionospheric observation. We have developed OpenCV-based machine vision algorithms that reduce terabytes per night of high-speed aurora video down to megabytes, to aid automated sifting and retention of high-value data from the mountains of less interesting data. Given prohibitively expensive data connections in many parts of the world, such techniques may be generalizable to more than just the auroral video and passive FM radar implemented so far. After the automated algorithm decides which data to keep, automated upload and distribution techniques are relevant to avoid excessive delay and consumption of researcher time. Open-source collaborative software development enables data audiences from experts through citizen enthusiasts to access the data and make exciting plots. Open software and data aid cross-disciplinary collaboration opportunities, STEM outreach, and public awareness of the contributions each geoscience data collection system makes.
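The reduction step can be as simple as an inter-frame activity score that decides which frames survive; the threshold and file layout below are illustrative, not the project's actual pipeline.

```python
import cv2
import numpy as np

def reduce_night(video_path, out_dir, activity_threshold=2.0):
    """Keep only frames showing activity (e.g. aurora), discarding the rest."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    kept, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Mean absolute inter-frame difference as a cheap activity score.
        score = float(np.mean(cv2.absdiff(gray, prev)))
        if score > activity_threshold:
            cv2.imwrite(f"{out_dir}/frame_{index:07d}.png", frame)
            kept += 1
        prev, index = gray, index + 1
    cap.release()
    return kept
```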
Parametric dense stereovision implementation on a system-on chip (SoC).
Gardel, Alfredo; Montejo, Pablo; García, Jorge; Bravo, Ignacio; Lázaro, José L
2012-01-01
This paper proposes a novel hardware implementation of dense recovery of stereovision 3D measurements. Traditionally, 3D stereo systems have imposed a cap on the number of stereo correspondences, introducing a large restriction on artificial vision algorithms. The proposed system-on-chip (SoC) provides great performance and efficiency, with a scalable architecture suitable for many different situations, addressing real-time processing of the stereo image flow. Using double-buffering techniques properly combined with pipelined processing, the use of reconfigurable hardware achieves a parametrisable SoC which gives the designer the opportunity to decide its right dimension and features. The proposed architecture does not need any external memory because processing is done as the image flow arrives. Our SoC provides 3D data directly, without storing whole stereo images. Our goal is to obtain high processing speed while maintaining the accuracy of 3D data using minimum resources. Configurable parameters may be controlled by later/parallel stages of the vision algorithm executed on an embedded processor. With an FPGA hardware clock of 100 MHz, image flows of up to 50 frames per second (fps) of dense stereo maps with more than 30,000 depth points can be obtained for 2 Mpix images, with minimal initial latency. The implementation of computer vision algorithms on reconfigurable hardware, especially low-level processing, opens up the prospect of their use in autonomous systems, where they can act as a coprocessor to reconstruct 3D images with high-density information in real time.
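For comparison, the software equivalent of one dense-disparity pass, using OpenCV block matching and the standard Z = fB/d depth conversion; the block-matching parameters are assumptions and this batched form does not reflect the SoC's streaming, memory-free design.

```python
import cv2
import numpy as np

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def dense_depth(left_gray, right_gray, focal_px, baseline_m):
    """Dense depth map from a rectified stereo pair."""
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f*B/d
    return depth
```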
Towards Guided Underwater Survey Using Light Visual Odometry
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.
2017-02-01
A light distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. Captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Relying on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point matching scheme relying on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point-correspondence search zone, as it depends linearly on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time w.r.t. other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
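A sketch of the low-level matching stage: corner detection plus normalized template matching, with the horizontal search strip bounded by the rough depth prior. The window sizes, correlation threshold, and disparity-band logic are assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def match_points(left, right, expected_disparity, patch=11, band=8):
    """Match corners between a rectified stereo pair.

    expected_disparity: disparity predicted from the a-priori depth belief
                        (light falloff); it limits the search zone.
    """
    corners = cv2.goodFeaturesToTrack(left, maxCorners=300, qualityLevel=0.01,
                                      minDistance=7, useHarrisDetector=True)
    h = patch // 2
    pairs = []
    for x, y in corners.reshape(-1, 2).astype(int):
        if y - h < 0 or y + h + 1 > left.shape[0]:
            continue
        tmpl = left[y - h:y + h + 1, x - h:x + h + 1]
        x0 = max(0, x - expected_disparity - band)
        x1 = min(right.shape[1], x + band)
        strip = right[y - h:y + h + 1, x0:x1]
        if strip.shape[1] <= patch:
            continue
        # Normalized correlation is tolerant to illumination changes.
        res = cv2.matchTemplate(strip, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:
            pairs.append(((x, y), (x0 + loc[0] + h, y)))
    return pairs
```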
A real-time monitoring system for night glare protection
NASA Astrophysics Data System (ADS)
Ma, Jun; Ni, Xuxiang
2010-11-01
When capturing a dark scene containing a very bright object, a monitoring camera saturates in some regions, and details are lost in and near these saturated regions because of glare. This work aims at developing a real-time night monitoring system. The system can decrease the influence of glare and recover more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system is made up of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens, and a DSP. LCoS, a reflective liquid crystal, can modulate the intensity of reflected light at every pixel as a digital device. Through the modulation function of the LCoS, the CCD is exposed sub-region by sub-region. Under the control of the DSP, the light intensity is decreased to a minimum in the glare regions, and in the other regions it is modulated by negative feedback based on PID control. In this way, more details of the object are imaged on the CCD, and the glare protection of the monitoring system is achieved. In the experiments, the feedback is controlled by an embedded system based on a TI DM642. Experiments show that this feedback modulation method not only reduces glare to improve image quality, but also enhances the dynamic range of the image. The high-quality, high-dynamic-range image is captured in real time at 30 Hz. The modulation depth of the LCoS determines how strong a glare can be removed.
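The per-region negative feedback is standard PID control; a toy version with illustrative gains and set-point (the paper's tuning is not given):

```python
class RegionPID:
    """Incremental PID controller driving one LCoS sub-region's attenuation."""

    def __init__(self, kp=0.4, ki=0.05, kd=0.1, target_level=128):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_level      # desired mean CCD grey level (assumed)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_level):
        error = self.target - measured_level
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # Positive output -> let more light through; negative -> attenuate.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = RegionPID()
attenuation = 0.5
for mean_grey in [240, 210, 170, 140]:   # a region recovering from glare
    attenuation = min(1.0, max(0.0, attenuation + 1e-3 * pid.update(mean_grey)))
```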
Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek
2014-01-01
This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 MPixels/s and to process a Full HD video stream (1,920 × 1,080 @ 60 fps). The structure of the optical flow module, the pre- and post-filtering blocks, and a flow reliability computation unit are described in detail. Three versions of the optical flow module, with different numerical precision, working frequency, and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
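For orientation, the iterations that the pipeline realises in hardware are the classic Horn-Schunck updates; a floating-point NumPy reference (a software sketch, not the article's fixed-point architecture):

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Reference Horn-Schunck optical flow between two greyscale frames."""
    im1 = im1.astype(np.float32)
    im2 = im2.astype(np.float32)
    # Image derivatives (simple first-order stencils).
    kx = np.array([[-1, 1], [-1, 1]], np.float32) * 0.25
    ky = np.array([[-1, -1], [1, 1]], np.float32) * 0.25
    kt = np.ones((2, 2), np.float32) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    # Neighbourhood-averaging kernel from the original Horn-Schunck paper.
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], np.float32) / 12.0
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```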
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence, and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture, and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture, and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a larger wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is computed using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both static and moving objects.
Embedded image processing engine using ARM cortex-M4 based STM32F407 microcontroller
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samaiya, Devesh, E-mail: samaiya.devesh@gmail.com
2014-10-06
Due to advancements in low-cost, easily available, yet powerful hardware and the revolution in open-source software, the urge to make newer, more interactive machines and electronic systems has increased manifold among engineers. To make systems more interactive, designers need easy-to-use sensor systems. Giving the boon of vision to machines was never easy; though no longer impossible these days, it is still neither easy nor cheap. This work presents a low-cost, moderate-performance, programmable image processing engine. The engine is able to capture real-time images, store them in permanent storage, and perform preprogrammed image processing operations on the captured images.
Video rate color region segmentation for mobile robotic applications
NASA Astrophysics Data System (ADS)
de Cabrol, Aymeric; Bonnin, Patrick J.; Hugel, Vincent; Blazevic, Pierre; Chetto, Maryline
2005-08-01
Color regions may be an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. But whereas numerous methods are used for vision systems embedded on robots, only a few use this segmentation, mainly because of the processing time it requires. In this paper, we propose a new real-time (i.e., video-rate) color region segmentation followed by a robust color classification and a merging of regions, dedicated to various applications such as the RoboCup four-legged league or an industrial conveyor wheeled robot. The performance of this algorithm and a comparison with other methods, in terms of result quality and temporal performance, are provided. For better-quality results, the obtained speedup is between 2 and 4; for same-quality results, it is up to 10. We also present the outlines of the Dynamic Vision System of the CLEOPATRE Project, for which this segmentation has been developed, and the Clear Box Methodology, which allowed us to create the new color region segmentation from the evaluation and knowledge of other well-known segmentations.
Person and gesture tracking with smart stereo cameras
NASA Astrophysics Data System (ADS)
Gordon, Gaile; Chen, Xiangrong; Buck, Ron
2008-02-01
Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap, and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data are also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G
2009-03-01
Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulty. This paper shows a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded into a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.
Bhatlawande, Shripad; Mahadevappa, Manjunatha; Mukherjee, Jayanta; Biswas, Mukul; Das, Debabrata; Gupta, Somedeb
2014-11-01
This paper proposes a new electronic mobility cane (EMC) for providing obstacle detection and way-finding assistance to visually impaired people. The main feature of this cane is that it constructs a logical map of the surrounding environment to deduce priority information. It provides a simplified representation of the surrounding environment without causing information overload, and it conveys this priority information to the subject using intuitive vibration, audio, or voice feedback. Other novel features of the EMC are staircase detection and a nonformal distance scaling scheme. It also provides information about the floor status. It consists of a low-power embedded system with ultrasonic sensors and safety indicators. The EMC was subjected to a series of clinical evaluations in order to verify its design and to assess its ability to assist subjects in their daily-life mobility. Clinical evaluations were performed with 16 totally blind and four low-vision subjects. All subjects walked through controlled and real-world test environments with the EMC and the traditional white cane. The evaluation results and significant scores on subjective measurements have shown the usefulness of the EMC in vision rehabilitation services.
Advanced flight computers for planetary exploration
NASA Technical Reports Server (NTRS)
Stephenson, R. Rhoads
1988-01-01
Research concerning flight computers for use on interplanetary probes is reviewed. The history of these computers from the Viking mission to the present is outlined. The differences between commercial ground computers and computers for planetary exploration are listed. The development of a computer for the Mariner Mark II comet rendezvous asteroid flyby mission is described. Various aspects of recently developed computer systems are examined, including the Max real-time embedded computer, a hypercube distributed supercomputer, a SAR data processor, a processor for the High Resolution IR Imaging Spectrometer, and a robotic vision multiresolution pyramid machine for processing images obtained by a Mars Rover.
Ma, Jun; Lewis, Megan A; Smyth, Joshua M
2018-04-12
In this commentary, we propose a vision for "practice-based translational behavior change research," which we define as clinical and public health practice-embedded research on the implementation, optimization, and fundamental mechanisms of behavioral interventions. This vision intends to be inclusive of important research elements for behavioral intervention development, testing, and implementation. We discuss important research gaps and conceptual and methodological advances in three key areas along the discovery (development) to delivery (implementation) continuum of evidence-based interventions to improve behavior and health that could help achieve our vision of practice-based translational behavior change research. We expect our proposed vision to be refined and evolve over time. Through highlighting critical gaps that can be addressed by integrating modern theoretical and methodological approaches across disciplines in behavioral medicine, we hope to inspire the development and funding of innovative research on more potent and implementable behavior change interventions for optimal population and individual health.
Topological analysis of group fragmentation in multiagent systems
NASA Astrophysics Data System (ADS)
DeLellis, Pietro; Porfiri, Maurizio; Bollt, Erik M.
2013-02-01
In social animals, the presence of conflicts of interest or multiple leaders can promote the emergence of two or more subgroups. Such subgroups are easily recognizable by human observers, yet a quantitative and objective measure of group fragmentation is currently lacking. In this paper, we explore the feasibility of detecting group fragmentation by embedding the raw data from the individuals' motions on a low-dimensional manifold and analyzing the topological features of this manifold. To perform the embedding, we employ the isomap algorithm, which is a data-driven machine learning tool extensively used in computer vision. We implement this procedure on a data set generated by a modified à la Vicsek model, where agents are partitioned into two or more subsets and an independent leader is assigned to each subset. The dimensionality of the embedding manifold is shown to be a measure of the number of emerging subgroups in the selected observation window and a cluster analysis is proposed to aid the interpretation of these findings. To explore the feasibility of using this approach to characterize group fragmentation in real time and thus reduce the computational cost in data processing and storage, we propose an interpolation method based on an inverse mapping from the embedding space to the original space. The effectiveness of the interpolation technique is illustrated on a test-bed example with potential impact on the regulation of collective behavior of animal groups using robotic stimuli.
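The embedding step itself is directly available in scikit-learn. A sketch that scans candidate dimensions with a crude elbow rule; the threshold and the use of reconstruction error as the stopping criterion are simplifying assumptions, as the paper's topological analysis is more refined.

```python
import numpy as np
from sklearn.manifold import Isomap

def fragmentation_dimension(trajectories, max_dim=5, n_neighbors=10):
    """Estimate the embedding dimension of raw multiagent motion data.

    trajectories: (n_samples, n_features) array, e.g. stacked agent
                  positions/headings inside one observation window.
    Returns the smallest dimension at which the reconstruction error
    stops improving markedly (crude elbow criterion).
    """
    errors = []
    for d in range(1, max_dim + 1):
        iso = Isomap(n_neighbors=n_neighbors, n_components=d)
        iso.fit(trajectories)
        errors.append(iso.reconstruction_error())
    errors = np.asarray(errors)
    gains = errors[:-1] - errors[1:]
    # First dimension whose marginal gain drops below 5% of the base error.
    return int(np.argmax(gains < 0.05 * errors[0]) + 1)
```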
Oral Health Care Delivery Within the Accountable Care Organization.
Blue, Christine; Riggs, Sheila
2016-06-01
The accountable care organization (ACO) provides an opportunity to strategically design a comprehensive health system in which oral health works within primary care. A dental hygienist/therapist within the ACO represents value-based health care in action. Inspired by health care reform efforts in Minnesota, a vision of an accountable care organization that integrates oral health into primary health care was developed. Dental hygienists and dental therapists can help accelerate the integration of oral health into primary care, particularly in light of the compelling evidence confirming the cost-effectiveness of care delivered by an allied workforce. A dental insurance Chief Operating Officer and a dental hygiene educator used their unique perspectives and experience to describe the potential of an interdisciplinary team-based approach to individual and population health, including oral health, via an accountable care community. The principles of the patient-centered medical home and the vision for accountable care communities present a paradigm shift from a curative system of care to a prevention-based system that encompasses the behavioral, social, nutritional, economic, and environmental factors that impact health and well-being. Oral health measures embedded in the spectrum of general health care have the potential to ensure a truly comprehensive healthcare system. Published by Elsevier Inc.
Image Processing Occupancy Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to the traditional passive infrared and ultrasonic motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy- and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.
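One ingredient of such image-based occupancy sensing, background subtraction with blob counting plus a crude illuminance proxy, can be sketched with OpenCV. The parameters are illustrative and this is not NREL's actual pipeline.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def occupancy_estimate(frame, min_blob_area=1500):
    """Return (occupant blob count, mean grey level) for one camera frame."""
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadows
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > min_blob_area]
    # Mean grey level doubles as a crude illuminance proxy for lighting control.
    illuminance = float(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean())
    return len(blobs), illuminance
```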
Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model
NASA Astrophysics Data System (ADS)
Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose
1999-01-01
This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, to support early diagnosis and follow-up of glaucoma. In stereo vision, depth information is obtained from triangulation of corresponding points in a pair of stereo images. In this paper, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are processed through a cubic B-spline interpolation technique in order to obtain a smoother representation. This methodology is being especially refined for use with medical images in the clinical evaluation of some eye diseases such as open-angle glaucoma, and is currently under testing for clinical evaluation and analysis of reproducibility and accuracy.
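A simplified sketch in the spirit of cepstral disparity measurement: splice the two corresponding windows side by side, so the echo between them produces a peak in the power cepstrum whose horizontal position equals the window width plus the disparity. The peak-search band and the lack of windowing are assumptions; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def cepstral_disparity(left_win, right_win):
    """Estimate disparity between two same-sized stereo windows."""
    composite = np.hstack([left_win, right_win]).astype(np.float64)
    spectrum = np.abs(np.fft.fft2(composite)) ** 2
    cepstrum = np.abs(np.fft.ifft2(np.log(spectrum + 1e-9)))
    w = left_win.shape[1]
    # Search the echo peak to the right of the splice offset
    # (row 0 carries the zero vertical-lag component).
    band = cepstrum[0, w:w + w // 2]
    return int(np.argmax(band))       # disparity in pixels, relative to w
```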
X-Eye: a novel wearable vision system
NASA Astrophysics Data System (ADS)
Wang, Yuan-Kai; Fan, Ching-Tang; Chen, Shao-Ang; Chen, Hou-Ye
2011-03-01
This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface with a small size but a large display for photo capture and management. The wearable vision system is implemented on an embedded system and achieves real-time performance. The hardware of the system includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, which has a small volume but can project a large screen. A triple-buffering mechanism is designed for efficient memory management. Software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by a color classification based on the expectation-maximization algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, we devise a lookup table (LUT) technique. Fingertips are extracted, and geometric features of the fingertip shapes are matched to recognize the user's gesture commands. In order to verify the accuracy of the gesture recognition module, experiments were conducted on eight scenes with 400 test videos, including the challenges of colorful backgrounds, low illumination, and flickering. The processing speed of the whole system, including gesture recognition, reaches a frame rate of 22.9 fps. Experimental results give a 99% recognition rate. These results demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
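The GMM-plus-LUT color classification could be sketched as follows: fit the mixture offline by EM, then bake the decision into a quantized RGB lookup table so runtime classification is a single table access per pixel. The component count, log-likelihood threshold, and quantization level are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def build_skin_lut(skin_pixels, threshold=-12.0, bins=32):
    """Fit a skin-colour GMM (EM under the hood) and bake it into a LUT.

    skin_pixels: (n, 3) array of RGB samples collected offline.
    """
    gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
    gmm.fit(skin_pixels.astype(np.float64))
    # Evaluate the decision once over a quantized RGB cube.
    step = 256 // bins
    axis = np.arange(bins) * step + step // 2
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"),
                    axis=-1).reshape(-1, 3)
    lut = (gmm.score_samples(grid) > threshold).reshape(bins, bins, bins)
    return lut, step

def classify_frame(frame_rgb, lut, step):
    q = frame_rgb // step                          # quantize each channel
    return lut[q[..., 0], q[..., 1], q[..., 2]]    # boolean skin mask
```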
We Can Watch It For You Wholesale
NASA Astrophysics Data System (ADS)
Lipton, Alan J.
This chapter provides an introduction to video analytics—a branch of computer vision technology that deals with automatic detection of activities and events in surveillance video feeds. Initial applications focused on the security and surveillance space, but as the technology improves it is rapidly finding a home in many other application areas. This chapter looks at some of those spaces, the requirements they impose on video analytics systems, and provides an example architecture and set of technology components to meet those requirements. This exemplary system is put through its paces to see how it stacks up in an embedded environment. Finally, we explore the future of video analytics and examine some of the market requirements that are driving breakthroughs in both video analytics and processor platform technology alike.
Real-time high-level video understanding using data warehouse
NASA Astrophysics Data System (ADS)
Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois
2006-02-01
High-level video content analysis such as video surveillance is often limited by the computational cost of automatic image understanding: reasoning processes like categorization require huge computing resources, and representing knowledge of objects, scenarios, and other models requires huge amounts of data. This article explains how to design and develop a "near-real-time adaptive image datamart", used first as a decision-support system for vision algorithms and then as a mass-storage system. Using the RDF specification as the storage format for vision-algorithm metadata, we can adapt data-warehouse concepts to video analysis, add processes able to adapt the current model, and pre-process data to speed up queries. In this way, when new data is sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, it is processed and the in-memory data model is updated. After some processing, possible interpretations of the data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally, we show how this system becomes a high-semantic data container for external data mining.
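[Editor's note] To make the storage format concrete, here is a hedged sketch of vision-algorithm metadata stored as RDF triples and queried with SPARQL, using the rdflib library; the namespace, class, and property names are invented for illustration, not the paper's schema.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/surveillance#")  # illustrative namespace

g = Graph()
evt = EX["event42"]
g.add((evt, RDF.type, EX.PersonDetection))
g.add((evt, EX.camera, Literal("cam-03")))
g.add((evt, EX.timestamp, Literal("2006-01-15T10:31:02", datatype=XSD.dateTime)))
g.add((evt, EX.confidence, Literal(0.87, datatype=XSD.double)))

# SPARQL query: all high-confidence detections from one camera.
q = """
PREFIX ex: <http://example.org/surveillance#>
SELECT ?e WHERE {
  ?e a ex:PersonDetection ;
     ex:camera "cam-03" ;
     ex:confidence ?c .
  FILTER (?c > 0.8)
}
"""
for row in g.query(q):
    print(row.e)
```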
NASA Astrophysics Data System (ADS)
Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.
2004-11-01
Surveillance and Automatic Target Recognition (ATR) applications are increasing as the cost of the computing power needed to process massive amounts of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, designing and implementing state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities requires integrating several technologies (e.g., telescopes, precision optics, cameras, and image/computer-vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern-recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable, and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost Content-Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers an order-of-magnitude performance advantage over a RAM-based (Random Access Memory) search for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described, and other SOPC solutions and design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera APEX 20K are provided to show the potential and power of the proposed method for low-cost, reconfigurable, fast image pattern recognition/retrieval at the hardware/software co-design level.
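[Editor's note] The order-of-magnitude gap between CAM and RAM search comes from comparing all stored words in parallel rather than one address per cycle. A software analogy, with a hash table standing in for the single-cycle parallel match; this illustrates the complexity argument only, not the FPGA design itself:

```python
import random
import time

patterns = [random.getrandbits(64) for _ in range(100_000)]

def ram_lookup(key):
    """RAM-style search: sequential scan, O(N) comparisons per query."""
    for addr, word in enumerate(patterns):
        if word == key:
            return addr
    return None

# CAM-style search: every entry is matched "at once"; a hash table is the
# closest software analogue of the hardware's parallel comparison.
cam = {word: addr for addr, word in enumerate(patterns)}

key = patterns[-1]  # worst case for the linear scan
t0 = time.perf_counter(); ram_lookup(key); t_ram = time.perf_counter() - t0
t0 = time.perf_counter(); cam.get(key);   t_cam = time.perf_counter() - t0
print(f"RAM scan: {t_ram:.6f}s  CAM-style: {t_cam:.6f}s")
```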
Soga, Kenichi; Schooling, Jennifer
2016-08-06
Design, construction, maintenance and upgrading of civil engineering infrastructure requires fresh thinking to minimize use of materials, energy and labour. This can only be achieved by understanding the performance of the infrastructure, both during its construction and throughout its design life, through innovative monitoring. Advances in sensor systems offer intriguing possibilities to radically alter methods of condition assessment and monitoring of infrastructure. In this paper, it is hypothesized that the future of infrastructure relies on smarter information; the rich information obtained from embedded sensors within infrastructure will act as a catalyst for new design, construction, operation and maintenance processes for integrated infrastructure systems linked directly with user behaviour patterns. Some examples of emerging sensor technologies for infrastructure sensing are given. They include distributed fibre-optics sensors, computer vision, wireless sensor networks, low-power micro-electromechanical systems, energy harvesting and citizens as sensors.
The role of vision on hand preshaping during reach to grasp.
Winges, Sara A; Weber, Douglas J; Santello, Marco
2003-10-01
During reaches to grasp objects with different shapes, hand posture is molded gradually to the object's contours. The present study examined the extent to which the temporal evolution of hand posture depends on continuous visual feedback. We asked subjects to reach and grasp objects with different shapes under five vision conditions (VCs). Subjects wore liquid crystal spectacles that occluded vision at four different latencies from onset of the reach. As a control, full-vision trials (VC5) were interspersed among the blocked-vision trials. Object shapes and all VCs were presented to the subjects in random order. Hand posture was measured by 15 sensors embedded in a glove. Linear regression analysis, discriminant analysis, and information theory were used to assess the effect of removing vision on the temporal evolution of hand shape. We found that reach duration increased when vision was occluded early in the reach. This was caused primarily by a slower approach of the hand toward the object near the end of the reach. However, vision condition did not have a significant effect on the covariation patterns of joint rotations, indicating that the gradual evolution of hand posture occurs in a similar fashion regardless of vision. Discriminant analysis further supported this interpretation, as the extent to which hand posture resembled object shape and the rate at which hand posture discrimination occurred throughout the movement were similar across vision conditions. These results extend previous observations on memory-guided reaches by showing that continuous visual feedback of the hand and/or object is not necessary to allow the hand to gradually conform to object contours.
Jung and the Soul of Education (at the "Crunch")
ERIC Educational Resources Information Center
Rowland, Susan
2012-01-01
C. G. Jung offers education a unique perspective on the dilemma of collective social demands versus individual needs. Indeed, so radical and profound is his vision of the learning psyche as collectively embedded that it addresses the current crisis over the demand for utilitarian higher education. Hence post-Jungian educationalists can develop…
Setting Sail for that Country: The Utopian Urge behind Inclusion
ERIC Educational Resources Information Center
McMaster, Christopher
2013-01-01
The vision for the future embedded in inclusive values has fuelled educational reform. This paper will explore the utopian drive behind inclusion. The contributions of thinkers as diverse as John Dewey, Antonio Gramsci, and Paulo Freire give impetus to efforts to create a better tomorrow. They, and those who have previously struggled for…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April 17-19...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-12
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October 2-4...
Multi-baseline bootstrapping at the Navy precision optical interferometer
NASA Astrophysics Data System (ADS)
Armstrong, J. T.; Schmitt, H. R.; Mozurkewich, D.; Jorgensen, A. M.; Muterspaugh, M. W.; Baines, E. K.; Benson, J. A.; Zavala, Robert T.; Hutter, D. J.
2014-07-01
The Navy Precision Optical Interferometer (NPOI) was designed from the beginning to support baseline bootstrapping with equally-spaced array elements. The motivation was the desire to image the surfaces of resolved stars with the maximum resolution possible with a six-element array. Bootstrapping two baselines together to track fringes on a third baseline has been used at the NPOI for many years, but the capabilities of the fringe tracking software did not permit us to bootstrap three or more baselines together. Recently, both a new backend (VISION; Tennessee State Univ.) and new hardware and firmware (AZ Embedded Systems and New Mexico Tech, respectively) for the current hybrid backend have made multi-baseline bootstrapping possible.
Technologies for Achieving Field Ubiquitous Computing
NASA Astrophysics Data System (ADS)
Nagashima, Akira
Although the term "ubiquitous" may sound like jargon used in information appliances, ubiquitous computing is an emerging concept in industrial automation. This paper presents the author's visions of field ubiquitous computing, which is based on the novel Internet Protocol IPv6. IPv6-based instrumentation will realize the next generation of manufacturing excellence. This paper focuses on the following five key issues: 1. IPv6 standardization; 2. IPv6 interfaces embedded in field devices; 3. Compatibility with FOUNDATION fieldbus; 4. Network security for field applications; and 5. Wireless technologies to complement IP instrumentation. Furthermore, the principles of digital plant operations and ubiquitous production that support the above key technologies to achieve field ubiquitous systems are discussed.
Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation
2010-01-01
Background Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed the cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used the CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). The automatic control eases the burden on the user and, as a result, the user can concentrate on what he/she does, not on how he/she should do it. The tests showed that the performance of the controller was satisfactory and that the users were able to operate the system with minimal prior training. PMID:20731834
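[Editor's note] A toy sketch of what a rule-based reasoning layer of this kind might look like: vision-estimated object properties mapped to one of a few grasp commands plus an aperture. The thresholds and grasp names are illustrative assumptions, not the CVS rule base.

```python
def select_grasp(obj):
    """Map estimated object dimensions (mm, from vision) to a grasp
    command; thresholds are invented for illustration."""
    w, h, d = obj["width_mm"], obj["height_mm"], obj["depth_mm"]
    if max(w, d) < 20:
        grasp = "pinch"      # small objects: two-finger precision grip
    elif h > 2.5 * max(w, d):
        grasp = "lateral"    # tall, thin objects: key/lateral grip
    else:
        grasp = "palmar"     # bulky objects: whole-hand power grasp
    aperture = min(1.0, 1.2 * max(w, d) / 100.0)  # normalized hand opening
    return grasp, aperture

print(select_grasp({"width_mm": 65, "height_mm": 90, "depth_mm": 60}))
```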
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-11
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-05
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-28
... Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY...: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will...
Exploring Principal Capacity to Lead Reform of Teaching and Learning Quality in Thailand
ERIC Educational Resources Information Center
Hallinger, Philip; Lee, Moosung
2013-01-01
In 1999 Thailand passed an ambitious national educational law that paved the way for major reforms in teaching, learning and school management. Despite the ambitious vision of reform embedded in this law, recent studies suggest that implementation progress has been slow, uneven, and lacking deep penetration into classrooms. Carried out ten years…
Coaching as Inquiry: The South Carolina Reading Initiative
ERIC Educational Resources Information Center
Stephens, Diane; Mills, Heidi
2014-01-01
Embedded within traditional notions of coaching are unstated expectations that (a) the coach is an expert and knows what it is that the other person should be doing and (b) based on his or her expertise, the coach should take actions to achieve his or her vision for the other person. Within the South Carolina Reading Initiative, however, literacy…
The Humanities and the Art of Public Discussion. Volume 2.
ERIC Educational Resources Information Center
Federation of State Humanities Councils, Washington, DC.
The marriage of the humanities to public discussion of major current issues is an invitation to understand how various points of view are embedded in one's history, values, visions of the future, and an understanding of what is right, wrong, and necessary. The essays in this volume examine three issues: abortion, economic competition, and racial…
Meet Me at the Crossroads: Over-Fishing to Meet the Standards
ERIC Educational Resources Information Center
Donovan, John E., II
2008-01-01
To achieve the vision of mathematics set forth in "Crossroads" ("AMATYC," 1995), students must experience mathematics as a sensemaking endeavor that informs their world. Embedding the study of mathematics into the real world is a challenge, particularly because it was not the way that many of us learned mathematics in the first place. This article…
A Vision for the Net Generation Media Center. Media Matters
ERIC Educational Resources Information Center
Johnson, Doug
2005-01-01
Many children today have never lived in a home without a computer. They are the "Net Generation," constantly "connected" by iPod, cell phone, keyboard, digital video camera, or game controller to various technologies. Recent studies have found that Net Genners see technology as "embedded in society," a primary means of connection with friends, and…
Navigation studies based on the ubiquitous positioning technologies
NASA Astrophysics Data System (ADS)
Ye, Lei; Mi, Weijie; Wang, Defeng
2007-11-01
This paper summarizes current positioning technologies: absolute and relative positioning methods, indoor and outdoor positioning, and active and passive positioning. Global Navigation Satellite System (GNSS) technologies, including GPS, GLONASS, Galileo and BD-1/2, are introduced as the omnipresent outdoor positioning technologies. After an analysis of the shortcomings of GNSS, indoor positioning technologies are discussed and compared, including A-GPS, cellular networks, infrared, electromagnetic methods, computer vision cognition, embedded pressure sensors, ultrasonic, RFID (Radio Frequency IDentification), Bluetooth, WLAN, etc. The concept and characteristics of ubiquitous positioning are then proposed. After comparing and selecting among these technologies following a systems-engineering methodology, a navigation system model based on an integrated indoor-outdoor positioning solution is proposed. This model was simulated in the Galileo Demonstration for World Expo Shanghai project. In conclusion, the prospects of navigation based on ubiquitous positioning are discussed, especially for meeting the public's need to acquire location information.
Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications
NASA Astrophysics Data System (ADS)
Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon
1997-04-01
A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted at real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines an Annealing Cellular Neural Network (ACNN) and a Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected to their neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. The feasibility of a compact current-mode VLSI design of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 3D MCM-based avionics architecture for NASA's New Millennium Program is also described.
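[Editor's note] For context, a cellular neural network couples each cell only to its 3x3 neighborhood through small feedback and input templates, which is what makes arrays of this kind locally connected and scalable in silicon. A minimal discrete-time sketch of a Chua-Yang-style update; the template values are illustrative edge-style ones, not from the chip design:

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of a Chua-Yang cellular neural network: state x,
    input u, feedback template A, input template B, bias z."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))  # piecewise-linear output
    dx = -x + convolve2d(y, A, mode='same') + convolve2d(u, B, mode='same') + z
    return x + dt * dx

A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
u = (np.random.rand(64, 64) > 0.5).astype(float) * 2 - 1  # bipolar input
x = u.copy()
for _ in range(50):
    x = cnn_step(x, u, A, B, z=-1.0)
```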
[Studies of vision by Leonardo da Vinci].
Berggren, L
2001-01-01
Leonardo was an advocate of the intromission theory of vision. Light rays from the object to the eye caused visual perceptions, which were transported to the brain ventricles via a hollow optic nerve. Leonardo introduced wax injections to explore the ventricular system. Perceptions were assumed to go to the "senso comune" in the middle (3rd) ventricle, also the seat of the soul. The processing station "imprensiva" in the anterior lateral horns, together with memory ("memoria") in the posterior (4th) ventricle, integrated the visual perceptions into visual experience. - Leonardo's sketches with circular lenses in the center of the eye reveal that his dependence on medieval optics prevailed over anatomical observations. Drawings of the anatomy of the sectioned eye are missing, although Leonardo had invented a new embedding technique: in order to dissect the eye without spilling its contents, the eye was first boiled in egg white and then cut. The procedure has now been repeated and showed that the ovoid lens becomes spherical after boiling. - Leonardo described how light rays were refracted and reflected in the eye, but his imperfect anatomy prevented the development of physiological optics. He was, however, the first to compare the eye with a pin-hole camera (camera obscura). Leonardo's drawings of the inverted pictures on the back wall of a camera obscura inspired its use as an instrument in artistic practice. The camera obscura was for centuries a model for explaining human vision.
The Environmental Production of Disability for Seniors with Age-Related Vision Loss.
McGrath, Colleen; Laliberte Rudman, Debbie; Spafford, Marlee; Trentham, Barry; Polgar, Jan
2017-03-01
To date, attention to the environmental production of disability among older adults with age-related vision loss (ARVL) has been limited. This critical ethnographic study aimed to reveal the ways in which environmental barriers produced and perpetuated disability for 10 older adults with ARVL. A modified version of Carspecken's five-stage approach for critical ethnography was adopted, with three methods of data collection used: a narrative interview, a participant observation session, and a semi-structured, in-depth interview. Findings revealed how disability is shaped for older adults with ARVL when they encounter environmental features that are embedded within an ageist and disablist society. These findings are illustrated by presenting an analysis of three commonly discussed activities: shopping, eating, and community mobility. Our discussion suggests that addressing the environmental production of disability requires inclusive social policy, advocacy, and a focus on education in order to develop and sustain age- and low-vision-friendly environments.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-03
... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...
Motion and Emotional Behavior Design for Pet Robot Dog
NASA Astrophysics Data System (ADS)
Cheng, Chi-Tai; Yang, Yu-Ting; Miao, Shih-Heng; Wong, Ching-Chang
A pet robot dog with two ears, one mouth, one facial expression plane, and one vision system is designed and implemented so that it can perform emotional behaviors. Three processors (an Intel® Pentium® M 1.0 GHz, an 8-bit 8051 processor, and an embedded NIOS soft-core processor) are used to control the robot. One camera, one power detector, four touch sensors, and one temperature detector are used to obtain information about the environment. The designed robot, with 20 DOF (degrees of freedom), is able to accomplish walking motions. A behavior system is built on the implemented pet robot so that it can choose a suitable behavior for different environmental situations. Practical tests show that the implemented pet robot dog can engage in emotional interaction with humans.
Biologically inspired collision avoidance system for unmanned vehicles
NASA Astrophysics Data System (ADS)
Ortiz, Fernando E.; Graham, Brett; Spagnoli, Kyle; Kelmelis, Eric J.
2009-05-01
In this project, we collaborate with researchers in the neuroscience department at the University of Delaware to develop a Field Programmable Gate Array (FPGA)-based embedded computer inspired by the brains of small vertebrates (fish). The mechanisms of object detection and avoidance in fish have been extensively studied by our Delaware collaborators. The midbrain optic tectum is a biological multimodal navigation controller capable of processing input from all senses that convey spatial information, including vision, audition, touch, and the lateral line (water-current sensing in fish). Unfortunately, computational complexity makes these models too slow for use in real-time applications; the simulations are run offline on state-of-the-art desktop computers, presenting a gap between the application and the target platform: a low-power embedded device. EM Photonics has expertise in developing high-performance computers based on commodity platforms such as graphics cards (GPUs) and FPGAs. FPGAs offer (1) high computational power, low power consumption and a small footprint (in line with typical autonomous vehicle constraints), and (2) the ability to implement massively parallel computational architectures, which can be leveraged to closely emulate biological systems. Combining UD's brain-modeling algorithms and the power of FPGAs, this computer enables autonomous navigation in complex environments, and further types of onboard neural processing in future applications.
Computational Unification: a Vision for Connecting Researchers
NASA Astrophysics Data System (ADS)
Troy, R. M.; Kingrey, O. J.
2002-12-01
Computational unification of science, once only a vision, is becoming a reality. This technology is based upon a scientifically defensible, general solution for Earth Science data management and processing. The computational unification of science offers a real opportunity to foster inter- and intra-discipline cooperation, and an end to 're-inventing the wheel'. As we move forward using computers as tools, it is past time to move from computationally isolating, "one-off" or discipline-specific solutions to a unified framework where research can be more easily shared, especially with researchers in other disciplines. The author will discuss how distributed meta-data, distributed processing, and distributed data objects are structured to constitute a working interdisciplinary system, including how these resources lead to scientific defensibility through known lineage of all data products. An illustration of how scientific processes are encapsulated and executed shows how previously written processes and functions are integrated into the system efficiently and with minimal effort. Meta-data basics illustrate how intricate relationships may easily be represented and used to good advantage. Retrieval techniques will be discussed, including the trade-offs of using meta-data versus embedded data, how the two may be integrated, and how simplifying assumptions may or may not help. This system is based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, whose goal was to find an alternative to the Hughes EOS-DIS system; it is presently offered by the Science Tools corporation, of which the author is a principal.
ERIC Educational Resources Information Center
Zappardino, Pamela
Stephen Jay Gould points out in "The Mismeasure of Man" (1981), "Science, since people must do it, is a socially embedded activity. It progresses by hunch, vision, and intuition." The legacy of the traditional construct of intelligence and its measurement through intelligence quotient (IQ) tests has not been educational improvement. Its legacy in…
Robotic Design Studio: Exploring the Big Ideas of Engineering in a Liberal Arts Environment.
ERIC Educational Resources Information Center
Turbak, Franklyn; Berg, Robbie
2002-01-01
Suggests that it is important to introduce liberal arts students to the essence of engineering. Describes Robotic Design Studio, a course in which students learn how to design, assemble, and program robots made out of LEGO parts, sensors, motors, and small embedded computers. Represents an alternative vision of how robot design can be used to…
ERIC Educational Resources Information Center
Warr Pedersen, Kristin
2017-01-01
Purpose: The purpose of this paper is to consider an expanded vision of professional development for embedding education for sustainability (EfS) in a higher education institution. Through an exploration of a community of practice at the University of Tasmania, this paper examines how collaborative peer learning can sustain and promote continued…
Organizational Context of Human Factors
1982-11-01
anthropometric characteristics of humans (reach, strength, etc.); biological limits of vision, hearing, memory; and work-load issues. This paper is in... developments of the organizational analysis field was the generalization that authoritarian structures led to low morale among personnel, and this led to... emphasized participation in decision making, and freedom to criticize practices, would lead to high morale and more output. Embedded in this
Low-complexity object detection with deep convolutional neural network for embedded systems
NASA Astrophysics Data System (ADS)
Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong
2017-09-01
We investigate low-complexity convolutional neural networks (CNNs) for object detection in embedded vision applications. It is well known that building an embedded system for CNN-based object detection is more challenging than for problems like image classification because of the computation and memory requirements. To meet these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO; hence, the network can detect objects without any limitation on their size. However, unlike YOLO, all layers in the proposed network are fully convolutional, so it can take input images of any size. We pick face detection as a use case and evaluate the proposed model on the FDDB and WIDER FACE datasets. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves accuracy comparable to state-of-the-art CNN-based object detection methods while reducing the model size by 3× and memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping the accuracy nearly as good as that of the floating-point model, and it achieves 20× faster inference than the floating-point model. Thus, the proposed method is promising for embedded implementations.
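[Editor's note] The 4× memory saving of the 8-bit model follows directly from storing weights as int8 plus a scale factor. A hedged sketch of per-tensor symmetric quantization; the paper's exact fixed-point scheme may differ:

```python
import numpy as np

def quantize_8bit(w):
    """Symmetric 8-bit quantization of a weight tensor with a single
    per-tensor scale (one common scheme, assumed here for illustration)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(3, 3, 64, 64).astype(np.float32)  # a conv kernel
q, s = quantize_8bit(w)
err = np.abs(dequantize(q, s) - w).max()
print(f"4x smaller storage, max abs error {err:.4f}")
```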
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Y; Sawant, A
Purpose: Most clinically-deployed strategies for respiratory motion management in lung radiotherapy (e.g., gating, tracking) use external markers that serve as surrogates for tumor motion. However, typical lung phantoms used to validate these strategies are rigid-exterior+rigid-interior or rigid-exterior+deformable-interior. Neither class adequately represents the human anatomy, which is deformable internally as well as externally. We describe the construction and experimental validation of a more realistic, externally- and internally-deformable, programmable lung phantom. Methods: The outer shell of a commercially-available lung phantom (RS-1500, RSD Inc.) was used. The shell consists of a chest cavity with a flexible anterior surface, and embedded vertebrae, rib-cage and sternum. A 3-axis platform was programmed with sinusoidal and six patient-recorded lung tumor trajectories. The platform was used to drive a rigid foam 'diaphragm' that compressed/decompressed the phantom interior. Experimental characterization comprised mapping the superior-inferior (SI) and anterior-posterior (AP) trajectories of external and internal radioopaque markers with kV x-ray fluoroscopy and correlating these with optical surface monitoring using the in-room VisionRT system. Results: The phantom correctly reproduced the programmed motion as well as realistic effects such as hysteresis. The reproducibility of marker trajectories over multiple runs for sinusoidal as well as patient traces, as characterized by fluoroscopy, was within 0.4 mm RMS error for internal as well as external markers. The motion trajectories of internal and external markers as measured by fluoroscopy were found to be highly correlated (R=0.97). Furthermore, motion trajectories of arbitrary points on the deforming phantom surface, as recorded by the VisionRT system, also showed a high correlation with respect to the fluoroscopically-measured trajectories of internal markers (R=0.92). Conclusion: We have developed a realistic externally- and internally-deformable lung phantom that will serve as a valuable tool for clinical QA and motion management research. This work was supported through funding from the NIH and VisionRT Ltd. Amit Sawant has research funding from Varian Medical Systems, VisionRT and Elekta.
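[Editor's note] The reported agreement figures reduce to simple trajectory statistics. A brief sketch of how the Pearson correlation (e.g., R = 0.97) and an RMS error between two sampled traces would be computed; the signals below are synthetic stand-ins for illustration:

```python
import numpy as np

def trajectory_agreement(internal, external):
    """Pearson R and RMS difference between two equally sampled traces."""
    r = np.corrcoef(internal, external)[0, 1]
    rms = np.sqrt(np.mean((internal - external) ** 2))
    return r, rms

t = np.linspace(0, 10, 500)
internal = 5 * np.sin(2 * np.pi * 0.25 * t)  # SI motion, mm (synthetic)
external = 4.6 * np.sin(2 * np.pi * 0.25 * t - 0.1) + 0.2 * np.random.randn(500)
print(trajectory_agreement(internal, external))
```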
Embedded Vision Sensor Network for Planogram Maintenance in Retail Environments.
Frontoni, Emanuele; Mancini, Adriano; Zingaretti, Primo
2015-08-27
A planogram is a detailed visual map that establishes the position of the products in a retail store. It is designed to supply the best location of a product for suppliers, to support an innovative merchandising approach, to increase sales and profits, and to better manage the shelves. Deviating from the planogram defeats the purpose of any of these goals, so maintaining the integrity of the planogram becomes a fundamental aspect of retail operations. We propose an embedded system, based mainly on a smart camera, able to detect and investigate the most important parameters in a retail store by identifying differences with respect to an "approved" planogram. Our solution concentrates all the surveying and useful measurements on a limited number of intercommunicating devices. These devices are simple, low cost and ready for immediate installation, providing an affordable and scalable solution to the problem of planogram maintenance. Moreover, over an Internet of Things (IoT) cloud-based architecture, the system supplies much additional data beyond the planogram itself, e.g., out-of-shelf events, promptly notified through SMS and/or mail. This approach allows the realization of highly integrated systems that are economical, complete and easy to use for a large number of users. Experimental results have proven that the system can efficiently calculate the deviation from a normal situation by comparing the base planogram image with the grabbed images.
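[Editor's note] A minimal sketch of the deviation check at the heart of such a system: compare the approved planogram image with the grabbed shelf image cell by cell and flag cells that drift past a threshold. The grid size and threshold are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def planogram_deviation(reference, current, grid=(8, 12), tol=25.0):
    """Flag grid cells where the grabbed shelf image differs from the
    approved planogram image (mean absolute gray-level difference)."""
    h, w = reference.shape[:2]
    rows, cols = grid
    flags = []
    for i in range(rows):
        for j in range(cols):
            ys = slice(i * h // rows, (i + 1) * h // rows)
            xs = slice(j * w // cols, (j + 1) * w // cols)
            diff = np.abs(reference[ys, xs].astype(float) -
                          current[ys, xs].astype(float)).mean()
            if diff > tol:
                flags.append((i, j, round(diff, 1)))
    return flags  # non-empty list -> deviation from the planogram
```

A flagged cell could then trigger the out-of-shelf notification path (SMS/mail) described above.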
Feature-based component model for design of embedded systems
NASA Astrophysics Data System (ADS)
Zha, Xuan Fang; Sriram, Ram D.
2004-11-01
An embedded system is a hybrid of hardware and software, combining software's flexibility with hardware's real-time performance. Embedded systems can be considered assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and for system-level design, simulation, and testing information. This paper proposes an approach to representing a feature-based embedded system model in OESM, i.e., an Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype by assembling virtual components. The framework not only provides a formal, precise model of the embedded system prototype but also offers the possibility of designing variant prototypes derived by swapping virtual components with different features. A case study example is discussed to illustrate the embedded system model.
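[Editor's note] To make the feature-based component framework concrete, a hedged object-model sketch: components carry features, a system is an assembly of components, and variant prototypes are derived by swapping virtual components. Class and field names are illustrative, not the NIST schema:

```python
from dataclasses import dataclass, field
from typing import Iterator, List

@dataclass
class Feature:
    name: str
    kind: str            # e.g. "hardware", "software", "interface"
    value: str = ""

@dataclass
class Component:
    name: str
    features: List[Feature] = field(default_factory=list)

@dataclass
class EmbeddedSystem:
    name: str
    components: List[Component] = field(default_factory=list)

    def variants(self, slot: str, alternatives: List[Component]) -> Iterator["EmbeddedSystem"]:
        """Derive prototype variants by swapping one virtual component."""
        for alt in alternatives:
            parts = [alt if c.name == slot else c for c in self.components]
            yield EmbeddedSystem(f"{self.name}/{alt.name}", parts)
```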
Kimmel, Daniel L.; Mammo, Dagem; Newsome, William T.
2012-01-01
From human perception to primate neurophysiology, monitoring eye position is critical to the study of vision, attention, oculomotor control, and behavior. Two principal techniques for the precise measurement of eye position—the long-standing sclera-embedded search coil and more recent optical tracking techniques—are in use in various laboratories, but no published study compares the performance of the two methods simultaneously in the same primates. Here we compare two popular systems—a sclera-embedded search coil from C-N-C Engineering and the EyeLink 1000 optical system from SR Research—by recording simultaneously from the same eye in the macaque monkey while the animal performed a simple oculomotor task. We found broad agreement between the two systems, particularly in positional accuracy during fixation, measurement of saccade amplitude, detection of fixational saccades, and sensitivity to subtle changes in eye position from trial to trial. Nonetheless, certain discrepancies persist, particularly elevated saccade peak velocities, post-saccadic ringing, influence of luminance change on reported position, and greater sample-to-sample variation in the optical system. Our study shows that optical performance now rivals that of the search coil, rendering optical systems appropriate for many if not most applications. This finding is consequential, especially for animal subjects, because the optical systems do not require invasive surgery for implantation and repair of search coils around the eye. Our data also allow laboratories using the optical system in human subjects to assess the strengths and limitations of the technique for their own applications. PMID:22912608
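[Editor's note] For a like-for-like comparison, both the coil and optical traces can be run through the same saccade detector. A sketch of conventional velocity-threshold detection; the 30 deg/s threshold and 1 kHz rate are common conventions, not necessarily the study's parameters:

```python
import numpy as np

def detect_saccades(pos_deg, fs=1000.0, vel_thresh=30.0):
    """Velocity-threshold saccade detection on a one-axis eye-position
    trace (degrees). Returns (onset_s, offset_s) pairs."""
    vel = np.gradient(pos_deg) * fs          # deg/s
    moving = np.abs(vel) > vel_thresh
    edges = np.diff(moving.astype(int))      # +1 at onsets, -1 at offsets
    onsets = np.where(edges == 1)[0]
    offsets = np.where(edges == -1)[0]
    # Note: a trace that starts or ends mid-saccade would need trimming.
    return [(on / fs, off / fs) for on, off in zip(onsets, offsets)]
```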
Post-vision and change: do we know how to change?
D'Avanzo, Charlene
2013-01-01
The scale and importance of Vision and Change in Undergraduate Biology Education: A Call to Action challenges us to ask fundamental questions about widespread transformation of college biology instruction. I propose that we have clarified the "vision" but lack research-based models and evidence needed to guide the "change." To support this claim, I focus on several key topics, including evidence about effective use of active-teaching pedagogy by typical faculty and whether certain programs improve students' understanding of the Vision and Change core concepts. Program evaluation is especially problematic. While current education research and theory should inform evaluation, several prominent biology faculty-development programs continue to rely on self-reporting by faculty and students. Science, technology, engineering, and mathematics (STEM) faculty-development overviews can guide program design. Such studies highlight viewing faculty members as collaborators, embedding rewards faculty value, and characteristics of effective faculty-development learning communities. A recent National Research Council report on discipline-based STEM education research emphasizes the need for long-term faculty development and deep conceptual change in teaching and learning as the basis for genuine transformation of college instruction. Despite the progress evident in Vision and Change, forward momentum will likely be limited, because we lack evidence-based, reliable models for actually realizing the desired "change."
Lunar Applications in Reconfigurable Computing
NASA Technical Reports Server (NTRS)
Somervill, Kevin
2008-01-01
NASA's Constellation Program is developing a lunar surface outpost in which reconfigurable computing will play a significant role. Reconfigurable systems provide a number of benefits over conventional software-based implementations, including performance and power efficiency, while the use of standardized reconfigurable hardware provides opportunities to reduce logistical overhead. The current vision for the lunar surface architecture includes habitation, mobility, and communications systems, each of which greatly benefits from reconfigurable hardware in applications including video processing, natural feature recognition, data formatting, IP offload processing, and embedded control systems. In deploying reprogrammable hardware, considerations similar to those of software systems must be managed. There needs to be a mechanism for discovery that enables applications to locate and utilize the available resources, and application interfaces are needed for both configuring the resources and transferring data between the application and the reconfigurable hardware. Each of these topics is explored in the context of deploying reconfigurable resources as an integral aspect of the lunar exploration architecture.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-17
... Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation..., Enhanced Flight Vision/ Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-06
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight...
Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun
2017-01-01
The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device’s built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction. PMID:28837096
Bio-inspired vision based robot control using featureless estimations of time-to-contact.
Zhang, Haijie; Zhao, Jianguo
2017-01-31
Marvelous vision-based dynamic behaviors of insects and birds, such as perching, landing, and obstacle avoidance, have inspired scientists to propose the idea of time-to-contact, defined as the time for a moving observer to contact an object or surface if the current velocity is maintained. Since time-to-contact can be estimated directly from consecutive images with only a vision sensor, it is widely used by a variety of robots for tasks such as obstacle avoidance, docking, chasing, perching and landing. However, most existing methods for estimating time-to-contact need to extract and track features during the control process, which is time-consuming and unsuitable for robots with limited computation power. In this paper, we adopt a featureless estimation method, extend it to more general settings with angular velocities, and improve the estimation results using Kalman filtering. Further, we design an error-based controller with a gain-scheduling strategy to control the motion of mobile robots. Experiments for both estimation and control were conducted using a customized mobile robot platform with low-cost embedded systems. Onboard experimental results demonstrate the effectiveness of the proposed approach, with the robot being controlled to successfully dock in front of a vertical wall. The estimation and control methods presented in this paper can be applied to computation-constrained miniature robots for agile locomotion such as landing, docking, or navigation.
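[Editor's note] One common featureless formulation, in the spirit of direct gradient-based methods: under pure translation along the optical axis, brightness constancy gives G/τ + I_t = 0 with the radial gradient G = x·I_x + y·I_y, solved in least squares over the whole image. A numpy sketch, which may differ in detail from the paper's estimator:

```python
import numpy as np

def time_to_contact(prev, curr, fs=30.0):
    """Featureless TTC estimate (seconds) from two consecutive grayscale
    frames, assuming pure translation along the optical axis."""
    I = 0.5 * (prev + curr)
    Iy, Ix = np.gradient(I)          # spatial derivatives
    It = (curr - prev) * fs          # temporal derivative, per second
    h, w = I.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y = x - w / 2.0, y - h / 2.0  # image-centered coordinates
    G = x * Ix + y * Iy              # radial brightness gradient
    # Least squares over all pixels: G/tau + It = 0
    return -np.sum(G * G) / (np.sum(G * It) + 1e-12)
```

No features are extracted or tracked, which is what makes estimators of this family attractive for computation-constrained platforms.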
Trainer, Asa; Hedberg, Thomas; Feeney, Allison Barnard; Fischer, Kevin; Rosche, Phil
2016-01-01
Advances in information technology triggered a digital revolution that holds the promise of reduced costs, improved productivity, and higher quality. To ride this wave of innovation, manufacturing enterprises are changing how product definitions are communicated - from paper to models. To achieve industry's vision of the Model-Based Enterprise (MBE), the MBE strategy must include model-based data interoperability from design to manufacturing and quality in the supply chain. The Model-Based Definition (MBD) is created by the original equipment manufacturer (OEM) using Computer-Aided Design (CAD) tools. This information is then shared with the supplier so that they can manufacture and inspect the physical parts. Today, suppliers predominantly use Computer-Aided Manufacturing (CAM) and Coordinate Measuring Machine (CMM) models for these tasks. Traditionally, the OEM has provided design data to the supplier in the form of two-dimensional (2D) drawings, but may also include a three-dimensional (3D) shape-geometry model, often in a standards-based format such as ISO 10303-203:2011 (STEP AP203). The supplier then creates the respective CAM and CMM models and machine programs to produce and inspect the parts. In the MBE vision for model-based data exchange, the CAD model must include product-and-manufacturing information (PMI) in addition to the shape geometry. Today's CAD tools can generate models with embedded PMI, and with the emergence of STEP AP242, a standards-based model with embedded PMI can now be shared downstream. The ongoing research detailed in this paper investigates three concepts. First, that utilizing a STEP AP242 model with embedded PMI for CAD-to-CAM and CAD-to-CMM data exchange is possible and valuable to the overall goal of a more efficient process. Second, that gaps in tools, standards, and processes inhibit industry's ability to cost-effectively achieve model-based data interoperability in pursuit of the MBE vision. Finally, that the interaction between CAD and CMM processes can be explored to determine whether feedback from CAM and CMM back to CAD is feasible. The main goal of our study is to test the hypothesis that model-based data interoperability from CAD to CAM and from CAD to CMM is feasible through standards-based integration. This paper presents several barriers to model-based data interoperability. Overall, the project team demonstrated the exchange of product definition data between CAD, CAM, and CMM systems using standards-based methods. While gaps in standards coverage were identified, these gaps should not stop industry's progress toward MBE. The results of our study provide evidence in support of an open-standards method for model-based data interoperability, which would provide maximum value and impact to industry.
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers; this thesis addresses several issues in parallel architectures and parallel algorithms for integrated vision systems.
Driver Distraction Using Visual-Based Sensors and Algorithms.
Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén
2016-10-28
Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have proven attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the types most commonly detected by video-based algorithms. Many distraction detection systems use only a single visual cue and therefore may be easily disturbed when occlusion or illumination changes appear. The combination of these visual cues is thus a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems, but they should be complemented with further visual cues (e.g., hand or body information) or even with distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should run on an embedded device or system inside the car. This is not a trivial task, and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. This paper reviews the role of computer vision technology applied to the development of monitoring systems to detect distraction, including the key points for the development and implementation of sensors that carry out distraction detection. Key points considered as future work and open challenges are also addressed.
Mocanu, Bogdan; Tapu, Ruxandra; Zaharia, Titus
2016-01-01
In the most recent report published by the World Health Organization concerning people with visual disabilities, it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to identify accurately both static and highly dynamic objects in a scene, regardless of their location, size, or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it, and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology, we performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing mobility, while being friendly and easy to learn. PMID:27801834
Biofuels and the role of space in sustainable innovation journeys
Raman, Sujatha; Mohr, Alison
2014-01-01
This paper aims to identify the lessons that should be learnt from how biofuels have been envisioned from the aftermath of the oil shocks of the 1970s to the present, and how these visions compare with biofuel production networks emerging in the 2000s. Working at the interface of sustainable innovation journey research and geographical theories on the spatial unevenness of sustainability transition projects, we show how the biofuels controversy is linked to characteristics of globalised industrial agricultural systems. The legitimacy problems of biofuels cannot be addressed by sustainability indicators or new technologies alone since they arise from the spatial ordering of biofuel production. In the 1970–80s, promoters of bioenergy anticipated current concerns about food security implications but envisioned bioenergy production to be territorially embedded at national or local scales where these issues would be managed. Where the territorial and scalar vision was breached, it was to imagine poorer countries exporting higher-value biofuel to the North rather than the raw material as in the controversial global biomass commodity chains of today. However, controversy now extends to the global impacts of national biofuel systems on food security and greenhouse gas emissions, and to their local impacts becoming more widely known. South/South and North/North trade conflicts are also emerging as are questions over biodegradable wastes and agricultural residues as global commodities. As assumptions of a food-versus-fuel conflict have come to be challenged, legitimacy questions over global agri-business and trade are spotlighted even further. In this context, visions of biofuel development that address these broader issues might be promising. These include large-scale biomass-for-fuel models in Europe that would transform global trade rules to allow small farmers in the global South to compete, and small-scale biofuel systems developed to address local energy needs in the South. PMID:24748726
Pervasive Monitoring—An Intelligent Sensor Pod Approach for Standardised Measurement Infrastructures
Resch, Bernd; Mittlboeck, Manfred; Lippautz, Michael
2010-01-01
Geo-sensor networks have traditionally been built up as closed, monolithic systems, thus limiting trans-domain usage of real-time measurements. This paper presents the technical infrastructure of a standardised embedded sensing device, which has been developed in the course of the Live Geography approach. The sensor pod implements data provision standards of the Sensor Web Enablement initiative, including an event-based alerting mechanism and location-aware Complex Event Processing functionality for detection of threshold transgression and quality assurance. The goal of this research is for the resulting highly flexible sensing architecture to bring sensor network applications one step closer to realising the vision of a "digital skin for planet earth". The developed infrastructure can potentially have far-reaching impacts on sensor-based monitoring systems through the deployment of ubiquitous and fine-grained sensor networks. This in turn allows for the straightforward use of live sensor data in existing spatial decision support systems to enable better-informed decision-making. PMID:22163537
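The sensor pod abstract names event-based alerting and location-aware Complex Event Processing for threshold transgression, but gives no implementation detail. Below is a minimal Python sketch of that pattern, with invented class and field names, a toy moving-mean quality-assurance step, and an illustrative threshold; it is not the Live Geography code.

from collections import deque

class ThresholdAlerter:
    """Toy event detector: alert when the smoothed reading crosses a threshold."""
    def __init__(self, threshold, window=5):
        self.threshold = threshold
        self.window = deque(maxlen=window)   # recent readings for QA smoothing

    def push(self, value):
        self.window.append(value)
        smoothed = sum(self.window) / len(self.window)  # crude quality assurance
        if smoothed > self.threshold:
            return {"event": "threshold_transgression", "value": round(smoothed, 2)}
        return None

alerter = ThresholdAlerter(threshold=30.0)   # e.g., air temperature in deg C
for reading in [28.5, 29.9, 31.2, 33.0, 34.1]:
    event = alerter.push(reading)
    if event:
        print(event)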
NASA Astrophysics Data System (ADS)
Moody, Marc; Fisher, Robert; Little, J. Kristin
2014-06-01
Boeing has developed a degraded visual environment navigational aid that is flying on the Boeing AH-6 light attack helicopter. The navigational aid is a two-dimensional software digital map underlay generated by the Boeing™ Geospatial Embedded Mapping Software (GEMS) and fully integrated with the operational flight program. The page format on the aircraft's multi-function displays (MFDs) is termed the Approach page. The existing work utilizes Digital Terrain Elevation Data (DTED) and OpenGL ES 2.0 graphics capabilities to compute the pertinent graphics underlay entirely on the graphics processing unit (GPU) within the AH-6 mission computer. The next release will incorporate cultural databases containing Digital Vertical Obstructions (DVO) to warn the crew of towers, buildings, and power lines when choosing an opportune landing site. Future IRAD will include Light Detection and Ranging (LIDAR) point-cloud-generating sensors to provide 2D and 3D synthetic vision on the final approach to the landing zone. Collision detection with respect to terrain, cultural, and point cloud datasets may be used to further augment the crew warning system. The techniques for creating the digital map underlay leverage the GPU almost entirely, making this solution viable on most embedded mission computing systems with an OpenGL ES 2.0 capable GPU. This paper focuses on the AH-6 crew interface process for determining a landing zone and flying the aircraft to it.
NASA Technical Reports Server (NTRS)
Bandhil, Pavan; Chitikeshi, Sanjeevi; Mahajan, Ajay; Figueroa, Fernando
2005-01-01
This paper proposes the development of intelligent sensors as part of an integrated systems approach, i.e., one treats the sensor as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols, and evolutionary methodologies that allow it to get better with time. Under a project being undertaken at NASA's Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators, or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Integrated Systems Health Monitoring (ISHM) vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS). The PIS discussed here consists of a thermocouple used to read temperature in analog form, which is then converted into digital values. A microprocessor collects the sensor readings and runs numerous embedded event detection routines on the collected data; if any event is detected, it is reported, stored, and sent to a remote system through an Ethernet connection. Hence the output of the PIS is data coupled with a confidence factor in the reliability of the data, which leads to information on the health of the sensor at all times. All protocols are consistent with the IEEE 1451.x standards. This work lays the foundation for the next generation of smart devices that have embedded intelligence for distributed decision-making capabilities.
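As a rough illustration of the PIS loop described above (an analog reading digitized, embedded event-detection routines, and a confidence factor attached to the reported data), here is a hypothetical Python sketch. The health heuristic and all numbers are invented; in the actual PIS the report would travel over Ethernet using IEEE 1451.x-consistent protocols.

import random
import statistics

def read_thermocouple():
    # Stand-in for the A/D-converted thermocouple reading (deg C).
    return 25.0 + random.gauss(0, 0.2)

def confidence_factor(history):
    # Toy self-assessment: confidence drops as recent readings become noisy.
    if len(history) < 3:
        return 1.0
    return max(0.0, 1.0 - statistics.stdev(history[-10:]))

history = []
for _ in range(20):
    value = read_thermocouple()
    history.append(value)
    report = {"value": round(value, 3),
              "confidence": round(confidence_factor(history), 3)}
    if abs(value - 25.0) > 1.0:            # embedded event-detection routine
        report["event"] = "temperature excursion"
    print(report)                          # in the PIS, sent to a remote system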
Optimization of image processing algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Poudel, Pramod; Shirvaikar, Mukul
2011-03-01
This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks, and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash and 256 MB of SDRAM. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images, and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
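For readers unfamiliar with the benchmark, the sketch below shows correlation-based template matching with OpenCV's Python bindings, the kind of workload the paper times; the file names are placeholders, and this single-core host code merely stands in for the ARM/DSP-partitioned implementation being measured.

import time
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # placeholder file
template = cv2.imread("face_template.png", cv2.IMREAD_GRAYSCALE)
assert scene is not None and template is not None, "test images not found"

start = time.perf_counter()
# Normalized cross-correlation; for large templates OpenCV can use DFT-based
# paths internally, the class of computation a DSP core accelerates well.
result = cv2.matchTemplate(scene, template, cv2.TM_CCORR_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
elapsed = time.perf_counter() - start

print(f"best match at {max_loc} (score {max_val:.3f}) in {elapsed * 1000:.1f} ms")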
Modeling the 360° Innovating Firm as a Multiple System or Collective Being
NASA Astrophysics Data System (ADS)
Bouchard, Véronique
Confronted with fast-changing technologies and markets and with increasing competitive pressures, firms are now required to innovate fast and continuously. In order to do so, several firms superimpose an intrapreneurial layer (IL) on their formal organization (FO). The two systems are in complex relations: the IL is embedded in the FO, sharing human, financial, and technical components, but strongly diverges from it when it comes to the representation, structure, values, and behavior of the shared components. Furthermore, the two systems simultaneously cooperate and compete. In the long run, the organizational dynamics usually play out to the detriment of the intrapreneurial layer, which remains marginal or regresses after an initial period of boom. The concepts of Multiple Systems and Collective Beings, proposed by Minati and Pessa, can help students of the firm adopt a different viewpoint on this issue. These concepts can help them move away from a rigid, Manichean view of the two systems' respective functions and roles towards a more fluid and elaborate vision of their relations, allowing for greater flexibility and coherence.
Hi-Vision telecine system using pickup tube
NASA Astrophysics Data System (ADS)
Iijima, Goro
1992-08-01
Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.
Energy efficiency of task allocation for embedded JPEG systems.
Fan, Yang-Hsin; Wu, Jan-Ou; Wang, San-Fu
2014-01-01
Embedded systems are everywhere, repeatedly performing a few particular functions. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, embedded-system development methodologies have been applied to the design of cloud embedded systems, making the applications of embedded systems more diverse. However, the more embedded systems work, the more energy they consume. This study applies hyperrectangle technology (HT) to embedded systems to obtain energy savings. HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components. Moreover, it can rapidly explore the energy consumption of various embedded system configurations. The effects are presented by assessing JPEG benchmarks. Experimental results demonstrate that HT achieves average energy savings of 29.84%, 2.07%, and 68.80% relative to GA, GHO, and Lin, respectively. PMID:24982983
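The abstract does not define hyperrectangle technology itself; purely as a sketch of the underlying design-space question (assign each JPEG task to hardware or software and compare total energy), here is a brute-force Python illustration. The task list and per-task energy figures are invented.

from itertools import product

tasks = ["color_convert", "dct", "quantize", "huffman"]
energy = {  # (software_mJ, hardware_mJ) per task; illustrative numbers only
    "color_convert": (12.0, 4.0),
    "dct": (30.0, 6.0),
    "quantize": (8.0, 3.0),
    "huffman": (15.0, 9.0),
}

best = min(
    product([0, 1], repeat=len(tasks)),     # 0 = software, 1 = hardware
    key=lambda alloc: sum(energy[t][a] for t, a in zip(tasks, alloc)),
)
total = sum(energy[t][a] for t, a in zip(tasks, best))
print("allocation:", dict(zip(tasks, best)), "total energy (mJ):", total)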
Embedded Web Technology: Applying World Wide Web Standards to Embedded Systems
NASA Technical Reports Server (NTRS)
Ponyik, Joseph G.; York, David W.
2002-01-01
Embedded systems have traditionally been developed in a highly customized manner. The user interface hardware and software, along with the interface to the embedded system, are typically unique to the system for which they are built, resulting in extra cost to the system in terms of development time and maintenance effort. World Wide Web standards have been developed over the past ten years with the goal of allowing servers and clients to interoperate seamlessly. The client and server systems can consist of differing hardware and software platforms, but the World Wide Web standards allow them to interface without knowing the details of the system at the other end of the interface. Embedded Web Technology is the merging of embedded systems with the World Wide Web. Embedded Web Technology decreases the cost of developing and maintaining the user interface by allowing the user to interface to the embedded system through a web browser running on a standard personal computer. Embedded Web Technology can also be used to simplify an embedded system's internal network.
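A minimal modern illustration of the Embedded Web Technology idea (not NASA's implementation) is an embedded system exposing its state over HTTP so that any browser becomes the user interface. The sketch below uses only the Python standard library; the endpoint path and state fields are invented.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

DEVICE_STATE = {"temperature_c": 24.7, "valve_open": False}   # hypothetical state

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":                 # illustrative endpoint
            body = json.dumps(DEVICE_STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Any web browser pointed at http://<device>:8080/status now acts as the UI.
    HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()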
A Practical Solution Using A New Approach To Robot Vision
NASA Astrophysics Data System (ADS)
Hudson, David L.
1984-01-01
Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others. The user then had to assemble the pieces, and in most instances had to write all of his own software to test, analyze, and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test, and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.
Basic design principles of colorimetric vision systems
NASA Astrophysics Data System (ADS)
Mumzhiu, Alex M.
1998-10-01
Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper, and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system could fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them for vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will also be discussed. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.
Computational approaches to vision
NASA Technical Reports Server (NTRS)
Barrow, H. G.; Tenenbaum, J. M.
1986-01-01
Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.
RESLanjut: The learning media for improve students understanding in embedded systems
NASA Astrophysics Data System (ADS)
Indrianto; Susanti, Meilia Nur Indah; Karina, Djunaidi
2017-08-01
Networking in embedded systems can be implemented with many kinds of networks, using mobile phones, Bluetooth, modems, Ethernet cards, wireless technology, and so on. Using a network in an embedded system enables remote control. In previous research, the researchers found that many students can comprehend the basic concepts of embedded systems and can build embedded system tools, but without network integration. A further development of the embedded system module is therefore needed. The design of the embedded system practicum module follows a prototyping method in order to achieve the desired goal. The prototyping method is often used in practice; a prototype may even form part of a product, consisting of logic expressions or an external physical interface. The embedded system practicum module is meant to increase student comprehension of the embedded systems course, and also to encourage students to innovate with technology-based tools. It is also meant to help teachers teach embedded system concepts in the course. Student comprehension is expected to increase with the use of the practicum course.
Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.
Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin
2015-07-28
This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images. PMID:26225982
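As a toy illustration of step (2), the projection-histogram stage, the Python sketch below counts plants as runs of above-threshold columns in a binarized plant mask. The mask and threshold are fabricated; the full pipeline also performs the perspective rectification of step (1) and the homography-based double-counting avoidance of step (3).

import numpy as np

# Toy binary mask (rows x cols): 1 = plant pixel; two "plants" along the row.
mask = np.zeros((8, 20), dtype=np.uint8)
mask[2:7, 3:6] = 1
mask[1:6, 12:16] = 1

projection = mask.sum(axis=0)        # column-wise projection histogram
above = projection > 2               # threshold separating plants from gaps
# Each run of above-threshold columns is one plant: count rising edges.
count = int(np.count_nonzero(above[1:] & ~above[:-1]) + above[0])
print("plants detected:", count)     # -> 2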
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time digital imaging has proven prohibitive for machine vision systems used within control systems that employ low-power single processors, without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor is developed, representing a single facet of the fly's eye. This sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is needed to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision problems.
Color image processing and vision system for an automated laser paint-stripping system
NASA Astrophysics Data System (ADS)
Hickey, John M., III; Hise, Lawson
1994-10-01
Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white, and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.
Strategic planning in a complex academic environment: lessons from one academic health center.
Levinson, Wendy; Axler, Helena
2007-08-01
Leaders in academic health centers (AHCs) must create a vision for their academic unit embedded in a complex environment. A formal strategic planning process can be valuable to help shape a clear vision taking advantage of potential collaborations and to develop specific achievable long- and short-term goals. The authors describe the steps in a formal strategic planning process and illustrate it with the example of the Department of Medicine at the University of Toronto Faculty of Medicine beginning in 2004. The process included the active participation of over 300 faculty members, trainees, and stakeholders of the department and resulted in broad-based support and leadership for the resulting plan. The authors describe the steps, which include getting started, committing to planning principles, establishing the work plan, understanding the environment, pulling it all together, shaping the vision, testing strategic directions, building effective implementation, and promoting the plan. Articulation of vision, mission, and values informed the plan's development, as well as 10 key principles integral to the plan. Challenges and lessons learned are also described. The final strategic plan is an active core activity of the department, guiding decisions and resource allocation and facilitating measurement of success or shortcomings. The process the authors describe is applicable to multiple academic units, including divisions/sections, departments, or thematic programs in AHCs.
Post–Vision and Change: Do We Know How to Change?
D’Avanzo, Charlene
2013-01-01
The scale and importance of Vision and Change in Undergraduate Biology Education: A Call to Action challenges us to ask fundamental questions about widespread transformation of college biology instruction. I propose that we have clarified the “vision” but lack research-based models and evidence needed to guide the “change.” To support this claim, I focus on several key topics, including evidence about effective use of active-teaching pedagogy by typical faculty and whether certain programs improve students’ understanding of the Vision and Change core concepts. Program evaluation is especially problematic. While current education research and theory should inform evaluation, several prominent biology faculty–development programs continue to rely on self-reporting by faculty and students. Science, technology, engineering, and mathematics (STEM) faculty-development overviews can guide program design. Such studies highlight viewing faculty members as collaborators, embedding rewards faculty value, and characteristics of effective faculty-development learning communities. A recent National Research Council report on discipline-based STEM education research emphasizes the need for long-term faculty development and deep conceptual change in teaching and learning as the basis for genuine transformation of college instruction. Despite the progress evident in Vision and Change, forward momentum will likely be limited, because we lack evidence-based, reliable models for actually realizing the desired “change.” PMID:24006386
The use of higher-order statistics in rapid object categorization in natural scenes.
Banno, Hayaki; Saiki, Jun
2015-02-04
We can rapidly and efficiently recognize many types of objects embedded in complex scenes. What information supports this object recognition is a fundamental question for understanding our visual processing. We investigated the eccentricity-dependent role of shape and statistical information for ultrarapid object categorization, using the higher-order statistics proposed by Portilla and Simoncelli (2000). Synthesized textures computed by their algorithms have the same higher-order statistics as the originals, while the global shapes were destroyed. We used the synthesized textures to manipulate the availability of shape information separately from the statistics. We hypothesized that shape makes a greater contribution to central vision than to peripheral vision and that statistics show the opposite pattern. Results did not show contributions clearly biased by eccentricity. Statistical information demonstrated a robust contribution not only in peripheral but also in central vision. For shape, the results supported the contribution in both central and peripheral vision. Further experiments revealed some interesting properties of the statistics. They are available for a limited time, attributable to the presence or absence of animals without shape, and predict how easily humans detect animals in original images. Our data suggest that when facing the time constraint of categorical processing, higher-order statistics underlie our significant performance for rapid categorization, irrespective of eccentricity.
Integrated Design and Implementation of Embedded Control Systems with Scilab
Ma, Longhua; Xia, Feng; Peng, Zhe
2008-01-01
Embedded systems are playing an increasingly important role in control engineering. Despite their popularity, embedded systems are generally subject to resource constraints and it is therefore difficult to build complex control systems on embedded platforms. Traditionally, the design and implementation of control systems are often separated, which causes the development of embedded control systems to be highly time-consuming and costly. To address these problems, this paper presents a low-cost, reusable, reconfigurable platform that enables integrated design and implementation of embedded control systems. To minimize the cost, free and open source software packages such as Linux and Scilab are used. Scilab is ported to the embedded ARM-Linux system. The drivers for interfacing Scilab with several communication protocols including serial, Ethernet, and Modbus are developed. Experiments are conducted to test the developed embedded platform. The use of Scilab enables implementation of complex control algorithms on embedded platforms. With the developed platform, it is possible to perform all phases of the development cycle of embedded control systems in a unified environment, thus facilitating the reduction of development time and cost. PMID:27873827
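The platform above executes Scilab control code; purely as a language-neutral illustration of the kind of algorithm such a platform runs, here is a minimal discrete PID loop in Python against a toy first-order plant. The gains, time step, and plant model are illustrative, not from the paper.

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.05):
    """One discrete PID update; state carries (integral, previous error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

setpoint, y = 1.0, 0.0
state = (0.0, 0.0)
for _ in range(100):
    u, state = pid_step(setpoint - y, state)
    y += 0.05 * (u - y)                     # toy first-order plant response
print(f"plant output after 100 steps: {y:.3f}")  # should approach the setpoint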
Trainer, Asa; Hedberg, Thomas; Feeney, Allison Barnard; Fischer, Kevin; Rosche, Phil
2017-01-01
Advances in information technology triggered a digital revolution that holds promise of reduced costs, improved productivity, and higher quality. To ride this wave of innovation, manufacturing enterprises are changing how product definitions are communicated – from paper to models. To achieve industry's vision of the Model-Based Enterprise (MBE), the MBE strategy must include model-based data interoperability from design to manufacturing and quality in the supply chain. The Model-Based Definition (MBD) is created by the original equipment manufacturer (OEM) using Computer-Aided Design (CAD) tools. This information is then shared with the supplier so that they can manufacture and inspect the physical parts. Today, suppliers predominantly use Computer-Aided Manufacturing (CAM) and Coordinate Measuring Machine (CMM) models for these tasks. Traditionally, the OEM has provided design data to the supplier in the form of two-dimensional (2D) drawings, but may also include a three-dimensional (3D)-shape-geometry model, often in a standards-based format such as ISO 10303-203:2011 (STEP AP203). The supplier then creates the respective CAM and CMM models and machine programs to produce and inspect the parts. In the MBE vision for model-based data exchange, the CAD model must include product-and-manufacturing information (PMI) in addition to the shape geometry. Today's CAD tools can generate models with embedded PMI. And, with the emergence of STEP AP242, a standards-based model with embedded PMI can now be shared downstream. The on-going research detailed in this paper seeks to investigate three concepts. First, that the ability to utilize a STEP AP242 model with embedded PMI for CAD-to-CAM and CAD-to-CMM data exchange is possible and valuable to the overall goal of a more efficient process. Second, the research identifies gaps in tools, standards, and processes that inhibit industry's ability to cost-effectively achieve model-based-data interoperability in the pursuit of the MBE vision. Finally, it also seeks to explore the interaction between CAD and CMM processes and determine if the concept of feedback from CAM and CMM back to CAD is feasible. The main goal of our study is to test the hypothesis that model-based-data interoperability from CAD-to-CAM and CAD-to-CMM is feasible through standards-based integration. This paper presents several barriers to model-based-data interoperability. Overall, the project team demonstrated the exchange of product definition data between CAD, CAM, and CMM systems using standards-based methods. While gaps in standards coverage were identified, the gaps should not stop industry's progress toward MBE. The results of our study provide evidence in support of an open-standards method to model-based-data interoperability, which would provide maximum value and impact to industry. PMID:28691120
NASA Astrophysics Data System (ADS)
Yu, Lingyu; Bao, Jingjing; Giurgiutiu, Victor
2004-07-01
The embedded ultrasonic structural radar (EUSR) algorithm is developed for using a piezoelectric wafer active sensor (PWAS) array to detect defects within a large area of a thin-plate specimen. Signal processing techniques are used to extract the time of flight of the wave packets, and thereby to determine the location of the defects with the EUSR algorithm. In our research, the transient tone-burst wave propagation signals are generated and collected by the embedded PWAS. Then, with signal processing, the frequency contents of the signals and the time of flight of individual frequencies are determined. This paper starts with an introduction to the embedded ultrasonic structural radar algorithm. We then describe the signal processing methods used to extract the time of flight of the wave packets: wavelet denoising, cross-correlation, and the Hilbert transform. Though the hardware can provide an averaging function to eliminate noise arising during signal collection, wavelet denoising is included to ensure better signal quality for application in severe real-world environments. For better estimation of the time of flight, the cross-correlation method is used. The Hilbert transform is applied to the signals after cross-correlation in order to extract their envelope. Signal processing and EUSR are both implemented in a user-friendly graphical interface program in LabVIEW. We conclude with a description of our vision for applying EUSR signal analysis to structural health monitoring and embedded nondestructive evaluation. To this end, we envisage an automatic damage detection application utilizing embedded PWAS, EUSR, and advanced signal processing.
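A compact Python sketch of the time-of-flight chain described above (cross-correlate the received signal with the excitation, then locate the peak of the Hilbert-transform envelope) is given below on synthetic signals. The sampling rate, burst parameters, and noise level are assumptions, and the wavelet-denoising step is omitted for brevity.

import numpy as np
from scipy.signal import hilbert

fs = 1e6                                     # assumed 1 MHz sampling rate
t = np.arange(0, 2e-3, 1 / fs)
burst = np.sin(2 * np.pi * 100e3 * t[:300]) * np.hanning(300)  # tone burst

received = np.zeros_like(t)
delay = 450                                  # "true" time of flight in samples
received[delay:delay + burst.size] = 0.5 * burst
received += 0.05 * np.random.randn(t.size)  # measurement noise

corr = np.correlate(received, burst, mode="full")[burst.size - 1:]
envelope = np.abs(hilbert(corr))             # smooth envelope of the correlation
tof = np.argmax(envelope) / fs
print(f"estimated time of flight: {tof * 1e6:.1f} us (true {delay / fs * 1e6:.1f} us)")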
Novel system for automatic measuring diopter based on ARM circuit block
NASA Astrophysics Data System (ADS)
Xue, Feng; Zhong, Lei; Chen, Zhe; Xue, Deng-pan; Li, Xiang-ning
2009-07-01
Traditional commercial instruments utilized in vision screening programs fall far short of the requirements for real-time diopter measurement, and their success is limited by drawbacks such as requiring an attached computer, bulky size, and low accuracy of the measured parameters. In addition, many devices cannot assess astigmatic eyes. This paper proposes a new design for a diopter measurement system based on SAMSUNG's ARM9 circuit block. The design makes several contributions: the newly developed system not only measures diopter automatically, but also offers low cost and, especially, simplicity and portability. Besides, by placing point sources in three directions, the instrument can assess astigmatic eyes at the same time. Most of the details are introduced, including the integrated design of the measuring system and the interface circuitry of the embedded system. A preliminary experiment shows that the system is feasible and valid; the maximum deviation of the measurement results is 0.344 D. The experimental results also demonstrate that the system can provide the service needed for real-time applications. The instrument presented here is expected to be widely applied in many fields, such as clinical practice and home healthcare.
Wearable Improved Vision System for Color Vision Deficiency Correction
Riccio, Daniel; Di Perna, Luigi; Sanniti Di Baja, Gabriella; De Nino, Maurizio; Rossi, Settimio; Testa, Francesco; Simonelli, Francesca; Frucci, Maria
2017-01-01
Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in a subject with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the vision color test as normal vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. PMID:28507827
McGrath, Colleen; Laliberte Rudman, Debbie; Polgar, Jan; Spafford, Marlee M; Trentham, Barry
2016-12-01
While previous research has explored the meaning of positive aging discourses from the perspective of older adults, the perspective of older adults aging with a disability has not been studied. In fact the intersection of aging and disability has been largely underexplored in both social gerontology and disability studies. This critical ethnography engaged ten older adults aging with vision loss in narrative interviews, participant observation sessions, and semi-structured in-depth interviews. The overarching objective was to understand those attributes that older adults with age-related vision loss perceive as being the markers of a 'good old age.' The authors critically examined how these markers, and their disabling effects, are situated in ageist and disablist social assumptions regarding what it means to 'age well'. The participants' descriptions of the markers of a 'good old age' were organized into five main themes: 1) maintaining independence while negotiating help; 2) responding positively to vision loss; 3) remaining active while managing risk; 4) managing expectations to be compliant, complicit, and cooperative and; 5) striving to maintain efficiency. The study findings have provided helpful insights into how the ideas and assumptions that operate in relation to disability and impairment in late life are re-produced among older adults with age-related vision loss and how older adults take on an identity that is consistent with socially embedded norms regarding what it means to 'age well'.
Embedded wavelet-based face recognition under variable position
NASA Astrophysics Data System (ADS)
Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi
2015-02-01
For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV, electronic device unlocking, and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of robustness to subject position and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale Face Database B*), that subject position in 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed, which is why compression techniques such as the wavelet transform are interesting. Furthermore, this leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the Raspberry Pi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board, and 26 ms on a Raspberry Pi (Model B).
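As a sketch of the representation this method relies on, the Python code below (using the PyWavelets package, which is an assumption, since the paper does not name its toolchain) keeps only the level-K approximation subband, shrinking each face image by a factor of 2^(2K) before PCA-style matching.

import numpy as np
import pywt

K = 3
face = np.random.rand(128, 128)      # stand-in for a grayscale face ROI

coeffs = pywt.wavedec2(face, wavelet="haar", level=K)
approx = coeffs[0]                   # approximation subband at level K

print("original pixels:", face.size)                     # 16384
print("approximation pixels:", approx.size)              # 16384 / 64 = 256
print("compression factor:", face.size // approx.size)   # 2^(2K) = 64 for K = 3
# PCA would then be trained on these flattened approximation vectors.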
Infrared sensors and systems for enhanced vision/autonomous landing applications
NASA Technical Reports Server (NTRS)
Kerr, J. Richard
1993-01-01
There exists a large body of data spanning more than two decades, regarding the ability of infrared imagers to 'see' through fog, i.e., in Category III weather conditions. Much of this data is anecdotal, highly specialized, and/or proprietary. In order to determine the efficacy and cost effectiveness of these sensors under a variety of climatic/weather conditions, there is a need for systematic data spanning a significant range of slant-path scenarios. These data should include simultaneous video recordings at visible, midwave (3-5 microns), and longwave (8-12 microns) wavelengths, with airborne weather pods that include the capability of determining the fog droplet size distributions. Existing data tend to show that infrared is more effective than would be expected from analysis and modeling. It is particularly more effective for inland (radiation) fog as compared to coastal (advection) fog, although both of these archetypes are oversimplifications. In addition, as would be expected from droplet size vs wavelength considerations, longwave outperforms midwave, in many cases by very substantial margins. Longwave also benefits from the higher level of available thermal energy at ambient temperatures. The principal attraction of midwave sensors is that staring focal plane technology is available at attractive cost-performance levels. However, longwave technology such as that developed at FLIR Systems, Inc. (FSI), has achieved high performance in small, economical, reliable imagers utilizing serial-parallel scanning techniques. In addition, FSI has developed dual-waveband systems particularly suited for enhanced vision flight testing. These systems include a substantial, embedded processing capability which can perform video-rate image enhancement and multisensor fusion. This is achieved with proprietary algorithms and includes such operations as real-time histograms, convolutions, and fast Fourier transforms.
Acuity of a Cryptochrome and Vision-Based Magnetoreception System in Birds
Solov'yov, Ilia A.; Mouritsen, Henrik; Schulten, Klaus
2010-01-01
The magnetic compass of birds is embedded in the visual system and it has been hypothesized that the primary sensory mechanism is based on a radical pair reaction. Previous models of magnetoreception have assumed that the radical pair-forming molecules are rigidly fixed in space, and this assumption has been a major objection to the suggested hypothesis. In this article, we investigate theoretically how much disorder is permitted for the radical pair-forming, protein-based magnetic compass in the eye to remain functional. Our study shows that only one rotational degree of freedom of the radical pair-forming protein needs to be partially constrained, while the other two rotational degrees of freedom do not impact the magnetoreceptive properties of the protein. The result implies that any membrane-associated protein is sufficiently restricted in its motion to function as a radical pair-based magnetoreceptor. We relate our theoretical findings to the cryptochromes, currently considered the likeliest candidate to furnish radical pair-based magnetoreception. PMID:20655831
Knowledge-based machine vision systems for space station automation
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1989-01-01
Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
Biological Basis For Computer Vision: Some Perspectives
NASA Astrophysics Data System (ADS)
Gupta, Madan M.
1990-03-01
Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
NASA Astrophysics Data System (ADS)
Jain, A. K.; Dorai, C.
Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.
Kuluski, Kerry; Nelson, Michelle L A; Tracy, C Shawn; Alloway, Carole Ann; Shorrock, Charles; Shearkhani, Sara; Upshur, Ross E G
2017-10-01
People's experiences can provide critical guidance on how to better meet their quality of life and care needs and deploy resources more appropriately. To maximize the utility of experience data and to advance the current debate, we present four recommendations: (1) measuring experiences outside the healthcare system can provide insight into what needs to change within the healthcare system; (2) focusing on patient experience is necessary but insufficient, (family) caregiver insights and experiences require attention and can provide insight into the needs of the patient; (3) moving from "one time/single sector" measurement of experience to iterative, ongoing measurement across sectors better reflects the true lived experience of patients (especially those with complex care needs) and their caregivers; and (4) embedding measurement within engagement-capable environments that adequately resource patients, caregivers, and providers to work together is required to move from collection to meaningful change. Applying these recommendations requires a longer-term vision, shifting from provider-centred to person-centred models of care, and a deep understanding of the structural, cultural, and normative barriers to measuring care experiences. © 2017 Longwoods Publishing.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2007-01-01
The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.
NASA Technical Reports Server (NTRS)
Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.
2006-01-01
System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.
VLSI chips for vision-based vehicle guidance
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1994-02-01
Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.
Vision Systems with the Human in the Loop
NASA Astrophysics Data System (ADS)
Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard
2005-12-01
The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically-based usability experiments is stressed.
NASA Astrophysics Data System (ADS)
Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad
2009-02-01
In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
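The dynamic-programming step in the second fusion stage can be illustrated with the classical scanline-correspondence formulation. The sketch below is a generic version of that technique, not the authors' exact cost model; the occlusion cost is an assumed constant:

```python
import numpy as np

def dp_scanline_match(left_row, right_row, occlusion_cost=20.0):
    """Match one epipolar scanline pair by dynamic programming: each
    left pixel either matches a right pixel (absolute intensity
    difference) or is skipped at a fixed occlusion cost."""
    n, m = len(left_row), len(right_row)
    D = np.zeros((n + 1, m + 1), np.float32)
    D[:, 0] = np.arange(n + 1) * occlusion_cost
    D[0, :] = np.arange(m + 1) * occlusion_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = D[i-1, j-1] + abs(float(left_row[i-1]) - float(right_row[j-1]))
            D[i, j] = min(match, D[i-1, j] + occlusion_cost,
                          D[i, j-1] + occlusion_cost)
    # Backtrack to recover disparities for matched pixels
    disparity = np.zeros(n, np.float32)
    i, j = n, m
    while i > 0 and j > 0:
        step = abs(float(left_row[i-1]) - float(right_row[j-1]))
        if np.isclose(D[i, j], D[i-1, j-1] + step):
            disparity[i-1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif np.isclose(D[i, j], D[i-1, j] + occlusion_cost):
            i -= 1
        else:
            j -= 1
    return disparity
```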
Manifold learning in machine vision and robotics
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-02-01
Smart algorithms are used in Machine vision and Robotics to organize or extract high-level information from the available data. Nowadays, Machine learning is an essential and ubiquitous tool to automate the extraction of patterns or regularities from data (images in Machine vision; camera, laser, and sonar sensor data in Robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high dimensional "observation space" with a smaller intrinsic dimensionality. The generally accepted model of such data is the manifold model, in accordance with which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet, as a rule, this model. The use of the Manifold learning technique in Machine vision and Robotics, which discovers a low-dimensional structure of high dimensional data and results in effective algorithms for solving a large number of subject-oriented tasks, is the subject of the conference plenary speech, some topics of which are presented in this paper.
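A minimal, generic illustration of the manifold model (not code from the talk): scikit-learn's Isomap recovering a 2-D intrinsic structure from 3-D observations, using the standard swiss-roll test data as an assumed stand-in for sensor data:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D observations that actually lie on a 2-D manifold (a swiss roll)
X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# Recover a 2-D embedding that "unrolls" the manifold
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(X.shape, '->', embedding.shape)   # (1500, 3) -> (1500, 2)
```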
NASA Technical Reports Server (NTRS)
1972-01-01
A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.
Reclaiming the person: intersectionality and dynamic social categories through a psychological lens.
Frazier, Kathryn E
2012-09-01
Psychology's conventional treatment of individuals' engagement with and resistance to the societal processes in which they are embedded has come under scrutiny amid the rise of postmodernist and critical feminist perspectives (among many others) in the social sciences. A sample of social psychology's responses to these critiques is presented in the recently published book, Social Categories in Everyday Experience, edited by Shaun Wiley et al. (2011). In this essay, the challenges of seriously addressing the critiques of psychology's conventional treatment of social categories, which implicate fundamental assumptions of the discipline, are discussed. Further, it is argued that in order to effectively construct psychological accounts of political activism and social change amid theories that are increasingly cognizant of the complexities and contingencies of social embeddedness, the person must be reclaimed and re-envisioned. Notions of agency that complement an intersectional and systemic vision of the social world are discussed.
A wonderful laboratory and a great researcher
NASA Astrophysics Data System (ADS)
Sheikh, N. M.
2004-05-01
It was great to be associated with Prof. Dr. Karl Rawer. He devoted his life to making use of the wonderful laboratory of Nature, the Ionosphere. Through acquisition of the experimental data from AEROS satellites and embedding it with data from ground stations, it was possible to achieve a better empirical model, the International Reference Ionosphere. Prof. Dr. Karl Rawer has been as dynamic as the Ionosphere. His vision about the ionospheric data is exceptional and has helped the scientific and engineering community to make use of his vision in advancing the dimensions of empirical modelling. As a human being, Prof. Dr. Karl Rawer has all the traits of an angel from Heaven. In short, he developed a large team of researchers forming a blooming tree from the parent node. The Ionosphere still plays an important role in over-the-horizon HF radar and GPS satellite data reduction.
Stanford, T; Pollack, R H
1984-09-01
A cross-sectional study comparing response time and the percentage of items correctly identified in three color vision tests (Pflügertrident, HRR-AO pseudoisochromatic plates, and AO pseudoisochromatic plates) was carried out on 72 women (12 in each decade) ranging from ages 20 to 79 years. Overall, time scores increased across the age groups. Analysis of the correctness scores indicated that the AO pseudoisochromatic plates requiring the identification of numbers was more difficult than the other tests which consisted of geometric forms or the letter E. This differential difficulty increased as a function of age. There was no indication of color defect per se which led to the conclusion that figure complexity may be the key variable determining performance. The results were similar to those obtained by Lee and Pollack (1978) in their study of the Embedded Figures Test.
What constitutes an efficient reference frame for vision?
Tadin, Duje; Lappin, Joseph S.; Blake, Randolph; Grossman, Emily D.
2015-01-01
Vision requires a reference frame. To what extent does this reference frame depend on the structure of the visual input, rather than just on retinal landmarks? This question is particularly relevant to the perception of dynamic scenes, when keeping track of external motion relative to the retina is difficult. We tested human subjects’ ability to discriminate the motion and temporal coherence of changing elements that were embedded in global patterns and whose perceptual organization was manipulated in a way that caused only minor changes to the retinal image. Coherence discriminations were always better when local elements were perceived to be organized as a global moving form than when they were perceived to be unorganized, individually moving entities. Our results indicate that perceived form influences the neural representation of its component features, and from this, we propose a new method for studying perceptual organization. PMID:12219092
Review of battery powered embedded systems design for mission-critical low-power applications
NASA Astrophysics Data System (ADS)
Malewski, Matthew; Cowell, David M. J.; Freear, Steven
2018-06-01
The applications and uses of embedded systems are increasingly pervasive. Mission- and safety-critical systems relying on embedded systems pose specific challenges. Embedded systems design is a multi-disciplinary domain, involving both hardware and software. Systems need to be designed in a holistic manner so that they are able to provide the desired reliability and minimise unnecessary complexity. The large problem landscape means that there is no one solution that fits all applications of embedded systems. With the primary focus of these mission- and safety-critical systems being functionality and reliability, there can be conflicts with business needs, and this can introduce pressures to reduce cost at the expense of reliability and functionality. This paper examines the challenges faced by battery-powered systems and then explores more general problems and several real-world embedded systems.
Deep hierarchies in the primate visual cortex: what can we learn for computer vision?
Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz
2013-08-01
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
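The core constraint the spiking network exploits, that corresponding events on the two sensors are nearly coincident in time and lie on the same epipolar line, can be sketched in plain array code. This is a toy serial version under assumed window and disparity bounds, not the neuromorphic implementation:

```python
import numpy as np

def match_events(left_events, right_events, dt=1e-3, max_disp=40):
    """Toy spike-coincidence stereo: pair events from two event cameras
    that occur on the same row within a short time window, and read the
    disparity off the column difference. Events are (t, x, y) rows."""
    matches = []
    for t, x, y in left_events:
        # candidate right-camera events: same row, near-simultaneous,
        # and within the allowed disparity range
        cand = right_events[(np.abs(right_events[:, 0] - t) < dt) &
                            (right_events[:, 2] == y) &
                            (x - right_events[:, 1] >= 0) &
                            (x - right_events[:, 1] <= max_disp)]
        if len(cand):
            best = cand[np.argmin(np.abs(cand[:, 0] - t))]  # closest in time
            matches.append((x, y, x - best[1]))             # (x, y, disparity)
    return matches
```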
Knowledge-based vision and simple visual machines.
Cliff, D; Noble, J
1997-01-01
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a moving object. We developed a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and produces accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages of the imaging system. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of great significance for the 3-D morphology measurement of moving objects.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III
2005-01-01
Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.
Research on numerical control system based on S3C2410 and MCX314AL
NASA Astrophysics Data System (ADS)
Ren, Qiang; Jiang, Tingbiao
2008-10-01
With the rapid development of micro-computer technology, embedded systems, CNC technology and integrated circuits, a numerical control system with powerful functions can be realized by several high-speed CPU chips and RISC (Reduced Instruction Set Computing) chips which have small size and strong stability. In addition, real-time operating systems also make the attainment of embedded systems possible. Developing an NC system based on embedded technology can overcome some shortcomings of common PC-based CNC systems, such as the waste of resources, low control precision, low frequency and low integration. This paper discusses a hardware platform for an ENC (Embedded Numerical Control) system based on the embedded processor chip ARM (Advanced RISC Machines) S3C2410 and the DSP (Digital Signal Processor) MCX314AL, and introduces the process of developing the ENC system software. Finally, the MCX314AL driver is written under the embedded Linux operating system. Embedded Linux handles multitasking well and satisfies the real-time and reliability requirements of motion control. With embedded technology, the NC system makes the best use of its resources and remains compact. It provides a wealth of functions and superior performance at a lower cost, and ENC is a clear direction for future development.
Secor, Ethan B; Smith, Jeremy; Marks, Tobin J; Hersam, Mark C
2016-07-13
Recent developments in solution-processed amorphous oxide semiconductors have established indium-gallium-zinc-oxide (IGZO) as a promising candidate for printed electronics. A key challenge for this vision is the integration of IGZO thin-film transistor (TFT) channels with compatible source/drain electrodes using low-temperature, solution-phase patterning methods. Here we demonstrate the suitability of inkjet-printed graphene electrodes for this purpose. In contrast to common inkjet-printed silver-based conductive inks, graphene provides a chemically stable electrode-channel interface. Furthermore, by embedding the graphene electrode between two consecutive IGZO printing passes, high-performance IGZO TFTs are achieved with an electron mobility of ∼6 cm²/V·s and a current on/off ratio of ∼10⁵. The resulting printed devices exhibit robust stability to aging in ambient as well as excellent resilience to thermal stress, thereby offering a promising platform for future printed electronics applications.
NASA Astrophysics Data System (ADS)
Cross, Jack; Schneider, John; Cariani, Pete
2013-05-01
Sierra Nevada Corporation (SNC) has developed rotary and fixed wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.
Real Time Target Tracking Using Dedicated Vision Hardware
NASA Astrophysics Data System (ADS)
Kambies, Keith; Walsh, Peter
1988-03-01
This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL) which was designed to provide NASA with a general purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated arm industrial robot using a camera and dedicated vision processor as the input sensor so that the robot can locate and track a moving target. The vision system is inside the loop closure of the robot tracking system; therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing the image at frame rate were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high speed vision based tracking task, the system throughput requirements, the use of a dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so special emphasis is placed on this topic in the paper.
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, wherein each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
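The converging multilayered structure can be illustrated with a plain image pyramid, where each computational level receives a coarser view of the scene. A minimal sketch (level count and the synthetic test image are arbitrary choices):

```python
import cv2
import numpy as np

def build_pyramid(image, levels=4):
    """Each pyrDown halves resolution: coarse levels give cheap global
    context, fine levels refine it -- a converging multilayer structure."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

frame = np.random.randint(0, 256, (256, 256), np.uint8)  # stand-in image
for level, img in enumerate(build_pyramid(frame)):
    print(level, img.shape)  # (256,256) -> (128,128) -> (64,64) -> (32,32)
```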
The Use of Video-Gaming Devices as a Motivation for Learning Embedded Systems Programming
ERIC Educational Resources Information Center
Gonzalez, J.; Pomares, H.; Damas, M.; Garcia-Sanchez,P.; Rodriguez-Alvarez, M.; Palomares, J. M.
2013-01-01
As embedded systems are becoming prevalent in everyday life, many universities are incorporating embedded systems-related courses in their undergraduate curricula. However, it is not easy to motivate students in such courses since they conceive of embedded systems as bizarre computing elements, different from the personal computers with which they…
FPGA Implementation of Stereo Disparity with High Throughput for Mobility Applications
NASA Technical Reports Server (NTRS)
Villalpando, Carlos Y.; Morfopolous, Arin; Matthies, Larry; Goldberg, Steven
2011-01-01
High speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024 x 768 3CCD (true RGB) camera pair at 15 Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image all within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68 MB/sec. Within the FPGA there are 4 distinct algorithms: Camera Link capture, Bilinear rectification, Bilateral subtraction pre-filtering and the Sum of Absolute Difference (SAD) disparity. Each module will be described in brief along with the data flow and control logic for the system. The system has been successfully fielded upon the Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher system during extensive field trials in 2007 and 2008 and is being implemented for other surface mobility systems at JPL.
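For reference, the SAD stage of such a pipeline can be expressed compactly in software. The sketch below is a software analogue of SAD block matching, not the FPGA datapath itself; the block size and disparity range are illustrative:

```python
import cv2
import numpy as np

def sad_disparity(left, right, max_disp=64, block=9):
    """Sum-of-absolute-differences block matching: for every pixel, pick
    the horizontal shift whose block-wise SAD against the right image is
    smallest. Inputs are rectified grayscale images of equal shape."""
    h, w = left.shape
    costs = np.full((max_disp, h, w), np.inf, np.float32)
    kernel = np.ones((block, block), np.float32)
    for d in range(max_disp):
        diff = np.abs(left[:, d:].astype(np.float32) -
                      right[:, :w - d].astype(np.float32))
        # box filter = SAD summed over the block neighbourhood
        costs[d, :, d:] = cv2.filter2D(diff, -1, kernel)
    return np.argmin(costs, axis=0).astype(np.uint8)
```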
Vision-based obstacle recognition system for automated lawn mower robot development
NASA Astrophysics Data System (ADS)
Mohd Zin, Zalhan; Ibrahim, Ratnawati
2011-06-01
Digital image processing (DIP) techniques have been widely used in various types of application recently. Classification and recognition of a specific object using a vision system require some challenging tasks in the field of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
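A minimal sketch of the filtering, segmentation and edge-detection chain described above, using OpenCV; the specific thresholds and the minimum-area rule are assumptions, not values from the paper:

```python
import cv2
import numpy as np

def detect_obstacles(frame_bgr, min_area=500):
    """Filtering -> edge detection -> contour segmentation, returning
    bounding boxes of candidate obstacles."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # noise filtering
    edges = cv2.Canny(blurred, 50, 150)                   # edge detection
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # close small gaps
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # keep only regions large enough to be obstacles
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```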
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-20
... NUCLEAR REGULATORY COMMISSION [NRC-2013-0098] Embedded Digital Devices in Safety-Related Systems... (NRC) is issuing for public comment Draft Regulatory Issue Summary (RIS) 2013-XX, ``Embedded Digital... requirements for the quality and reliability of basic components with embedded digital devices. DATES: Submit...
Monovision techniques for telerobots
NASA Technical Reports Server (NTRS)
Goode, P. W.; Carnils, K.
1987-01-01
The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.
An integrated compact airborne multispectral imaging system using embedded computer
NASA Astrophysics Data System (ADS)
Zhang, Yuedong; Wang, Li; Zhang, Xuguo
2015-08-01
An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the filter wheel and the stabilized platform, acquires the image and POS data, and stores the images and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and stored image data management are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
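The control flow of one acquisition cycle can be sketched as follows. The device wrapper classes here are hypothetical placeholders for the real camera, filter wheel, and POS interfaces, which the abstract does not specify:

```python
import time

# Hypothetical device wrappers -- stand-ins for the real hardware drivers
# exposed through the embedded computer's ports.
class FilterWheel:                       # hypothetical
    def move_to(self, slot): ...
class Camera:                            # hypothetical
    def set_exposure(self, ms): ...
    def grab(self): return b''
class POS:                               # hypothetical
    def read(self): return {}

def acquire_cycle(camera, wheel, pos, n_filters=8, exposure_ms=10):
    """One multispectral acquisition cycle: step through all eight
    filters, grab one frame per band, and tag each with POS data."""
    frames = []
    camera.set_exposure(exposure_ms)
    for slot in range(n_filters):
        wheel.move_to(slot)
        time.sleep(0.05)                 # allow the wheel to settle
        frames.append((slot, camera.grab(), pos.read()))
    return frames
```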
Design of a dynamic test platform for autonomous robot vision systems
NASA Technical Reports Server (NTRS)
Rich, G. C.
1980-01-01
The concept and design of a dynamic test platform for development and evaluation of a robot vision system is discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi laser/multi detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform. It can then be subjected to a wide variety of simulated motions and can thus be examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process are treated separately, such as the structure, driving linkages, and motors and transmissions.
FingerScanner: Embedding a Fingerprint Scanner in a Raspberry Pi.
Sapes, Jordi; Solsona, Francesc
2016-02-06
Nowadays, researchers are paying increasing attention to embedded systems. Cost reduction has led to an increase in the number of platforms supporting the operating system Linux, jointly with the Raspberry Pi motherboard. Thus, embedding devices on Raspberry-Linux systems is a goal in order to make competitive commercial products. This paper presents a low-cost fingerprint recognition system embedded into a Raspberry Pi with Linux. PMID:26861340
Use of Student Experiments for Teaching Embedded Software Development Including HW/SW Co-Design
ERIC Educational Resources Information Center
Mitsui, H.; Kambe, H.; Koizumi, H.
2009-01-01
Embedded systems have been applied widely, not only to consumer products and industrial machines, but also to new applications such as ubiquitous or sensor networking. The increasing role of software (SW) in embedded system development has caused a great demand for embedded SW engineers, and university education for embedded SW engineering has…
2011-11-01
RX-TY-TR-2011-0096-01) develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica.
Research and application of embedded real-time operating system
NASA Astrophysics Data System (ADS)
Zhang, Bo
2013-03-01
In this paper, based on an analysis of existing embedded real-time operating systems, the architecture of an operating system is designed and implemented. The experimental results show that the design fully complies with the requirements of an embedded real-time operating system and achieves the purposes of reducing the complexity of embedded software design and improving maintainability, reliability, and flexibility. Therefore, this design has high practical value.
Research and Design of Embedded Wireless Meal Ordering System Based on SQLite
NASA Astrophysics Data System (ADS)
Zhang, Jihong; Chen, Xiaoquan
The paper describes the features, internal architecture and development method of SQLite, and then presents the design and implementation of a meal ordering system. The system realizes information interaction among users and embedded devices with SQLite as the database system. The embedded database SQLite manages the data, and wireless communication is achieved using Bluetooth. A system program based on Qt/Embedded and Linux drivers realizes the local management of environmental data.
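A minimal sketch of the ordering terminal's local data management with SQLite (Python's built-in sqlite3 module used for brevity; the table layout is a hypothetical schema, since the paper does not give one):

```python
import sqlite3

conn = sqlite3.connect('orders.db')
conn.execute("""CREATE TABLE IF NOT EXISTS orders (
                    id      INTEGER PRIMARY KEY AUTOINCREMENT,
                    tableno INTEGER NOT NULL,
                    dish    TEXT    NOT NULL,
                    qty     INTEGER NOT NULL DEFAULT 1,
                    sent    INTEGER NOT NULL DEFAULT 0)""")

def place_order(tableno, dish, qty=1):
    with conn:  # implicit transaction: the row is committed atomically
        conn.execute("INSERT INTO orders (tableno, dish, qty) VALUES (?, ?, ?)",
                     (tableno, dish, qty))

def pending_orders():
    """Orders not yet transmitted to the kitchen over Bluetooth."""
    return conn.execute("SELECT id, tableno, dish, qty FROM orders "
                        "WHERE sent = 0").fetchall()

place_order(7, 'noodles', 2)
print(pending_orders())
```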
NASA Astrophysics Data System (ADS)
Zhang, De-gan; Zhang, Xiao-dan
2012-11-01
With the growth of the amount of information manipulated by embedded application systems, which are embedded into devices and offer access to the devices on the internet, saving the information systematically is necessary so as to serve client access and local processing more efficiently. For supporting mobile applications, a design and implementation solution for an embedded uninterruptible power supply (UPS) system (in brief, EUPSS) is brought forward for long-distance monitoring and control of a UPS based on the Web. The implementation of the system is based on ATmega161, RTL8019AS and ARM chips with the TCP/IP protocol suite for communication. In the embedded UPS system, an embedded file system is designed and implemented which saves the data and index information on a serial EEPROM chip in a structured way and communicates with a microcontroller unit through an I2C bus. By embedding the file system into the UPS system or other information appliances, users can access and manipulate local data on the web client side. Embedded file systems on chips will play a major role in the growth of IP networking. Based on our experimental tests, mobile users can easily monitor and control the UPS remotely from different places. The performance of EUPSS has satisfied the requirements of all kinds of Web-based mobile applications.
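A minimal sketch of structured storage on a serial EEPROM over I2C, assuming a 24LC256-style part at address 0x50 on bus 1 and the smbus2 Python library; the paper's actual file-system layout and part numbers are not specified:

```python
import time
from smbus2 import SMBus, i2c_msg

EEPROM_ADDR, BUS = 0x50, 1   # assumed part address and I2C bus number

def eeprom_write(mem_addr, data):
    """Write a few bytes at a 16-bit memory address (the write must stay
    within one EEPROM page)."""
    with SMBus(BUS) as bus:
        msg = i2c_msg.write(EEPROM_ADDR,
                            [mem_addr >> 8, mem_addr & 0xFF] + list(data))
        bus.i2c_rdwr(msg)
    time.sleep(0.005)  # wait out the EEPROM's internal write cycle

def eeprom_read(mem_addr, length):
    """Random read: set the address pointer, then read bytes back."""
    with SMBus(BUS) as bus:
        set_ptr = i2c_msg.write(EEPROM_ADDR, [mem_addr >> 8, mem_addr & 0xFF])
        rd = i2c_msg.read(EEPROM_ADDR, length)
        bus.i2c_rdwr(set_ptr, rd)
        return bytes(list(rd))

# e.g. log one UPS status record (one index byte + payload)
eeprom_write(0x0000, b'\x01' + b'OK:230V')
print(eeprom_read(0x0000, 8))
```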
Creating a Role for Embedded Librarians Within an Active Learning Environment.
Hackman, Dawn E; Francis, Marcia J; Johnson, Erika; Nickum, Annie; Thormodson, Kelly
2017-01-01
In 2013, the librarians at a small academic health sciences library reevaluated their mission, vision, and strategic plan to expand their roles. The school was transitioning to a new pedagogical culture and a new building designed to emphasize interprofessional education and active learning methodologies. Subsequent efforts to implement the new strategic plan resulted in the librarians joining curriculum committees and other institutional initiatives, such as an Active Learning Task Force, and participating in faculty development workshops. This participation has increased visibility and led to new roles and opportunities for librarians.
Vehicle-based vision sensors for intelligent highway systems
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1989-09-01
This paper describes a vision system, based on an ASIC (Application Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application-specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
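The distance measurement rests on stereo triangulation: depth equals focal length times baseline divided by disparity. A minimal sketch with illustrative numbers (not Masaki's hardware parameters):

```python
def distance_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: depth is inversely proportional to the
    disparity between matched image points."""
    return focal_px * baseline_m / disparity_px

# 800-pixel focal length, 0.3 m camera baseline, 6-pixel disparity
print(distance_from_disparity(800, 0.3, 6))  # -> 40.0 m to the lead vehicle
```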
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-08-30
Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.
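As a stand-in for the paper's own marker tracker, normalized cross-correlation against the known marker template captures the basic idea; the score threshold is an assumption, and the real system adds temporal tracking on the embedded board:

```python
import cv2

def track_marker(frame_gray, marker_template, min_score=0.7):
    """Locate the landing marker by normalized cross-correlation and
    return its image-plane center, which guides the descent."""
    result = cv2.matchTemplate(frame_gray, marker_template,
                               cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    if score < min_score:
        return None                      # marker not visible
    h, w = marker_template.shape
    cx, cy = top_left[0] + w // 2, top_left[1] + h // 2
    return cx, cy, score
```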
Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena
2013-01-01
In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria which was used for testing the system. The performed reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.
NASA Astrophysics Data System (ADS)
Kambe, Hidetoshi; Mitsui, Hiroyasu; Endo, Satoshi; Koizumi, Hisao
The applications of embedded system technologies have spread widely in various products, such as home appliances, cellular phones, automobiles, industrial machines and so on. Due to intensified competition, embedded software has expanded its role in realizing sophisticated functions, and new development methods like hardware/software (HW/SW) co-design for uniting HW and SW development have been researched. The shortfall of embedded SW engineers in Japan was estimated to be approximately 99,000 in 2006. Embedded SW engineers should understand HW technologies and system architecture design as well as SW technologies. However, few universities offer this kind of education systematically. We propose a student experiment method for learning the basics of embedded system development, which includes a set of experiments for developing embedded SW, developing embedded HW and experiencing HW/SW co-design. The co-design experiment helps students learn about the basics of embedded system architecture design and the flow of designing actual HW and SW modules. We developed these experiments and evaluated them.
Machine vision systems using machine learning for industrial product inspection
NASA Astrophysics Data System (ADS)
Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony
2002-02-01
Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF stage is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB inspection system, the LIF component learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
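The two-stage LIF/OLI split can be sketched with a deliberately simple feature, per-cell intensity statistics learned from known-good boards; the real SMV systems learn far richer features from CAD data and sample products:

```python
import numpy as np

def cell_means(image, grid=(8, 8)):
    """Split a grayscale board image into a grid and take the mean
    intensity of each cell."""
    h, w = image.shape
    gh, gw = h // grid[0], w // grid[1]
    return np.array([[image[r*gh:(r+1)*gh, c*gw:(c+1)*gw].mean()
                      for c in range(grid[1])] for r in range(grid[0])])

def learn_features(golden_images, grid=(8, 8)):
    """LIF stage: learn per-cell statistics (and a tolerance band)
    from a set of known-good boards."""
    cells = np.stack([cell_means(img, grid) for img in golden_images])
    return cells.mean(axis=0), cells.std(axis=0) * 3 + 5  # mean, tolerance

def inspect(image, mean, tol, grid=(8, 8)):
    """OLI stage: flag grid cells that deviate from the learned model."""
    deviation = np.abs(cell_means(image, grid) - mean)
    return np.argwhere(deviation > tol)   # (row, col) of suspect cells
```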
NICMOS CAPTURES THE HEART OF OMC-1
NASA Technical Reports Server (NTRS)
2002-01-01
The infrared vision of the Hubble Space Telescope's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) is providing a dramatic new look at the beautiful Orion Nebula which contains the nearest nursery for massive stars. For comparison, Hubble's Wide Field and Planetary Camera 2 (WFPC2) image on the left shows a large part of the nebula as it appears in visible light. The heart of the giant Orion molecular cloud, OMC-1, is included in the relatively dim and featureless area inside the blue outline near the top of the image. Light from a few foreground stars seen in the WFPC2 image provides only a hint of the many other stars embedded in this dense cloud. NICMOS's infrared vision reveals a chaotic, active star birth region (as seen in the right-hand image). Here, stars and glowing interstellar dust, heated by and scattering the intense starlight, appear yellow-orange. Emission by excited hydrogen molecules appears blue. The image is oriented with north up and east to the left. The diagonal extent of the image is about 0.4 light-years. Some details are as small as the size of our solar system. The brightest object in the image is a massive young star called BN (Becklin-Neugebauer). Blue 'fingers' of molecular hydrogen emission indicate the presence of violent outflows, probably produced by a young star or stars still embedded in dust (located to the lower left, southeast, of BN). The outflowing material may also produce the crescent-shaped 'bow shock' on the edge of a dark feature north of BN and the two bright 'arcs' south of BN. The detection of several sets of closely spaced double stars in these observations further demonstrates NICMOS's ability to see fine details not possible from ground-based telescopes. Credits: NICMOS image -- Rodger Thompson, Marcia Rieke, Glenn Schneider, Susan Stolovy (University of Arizona); Edwin Erickson (SETI Institute/Ames Research Center); David Axon (STScI); and NASA WFPC2 image -- C. Robert O'Dell, Shui Kwan Wong (Rice University) and NASA Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from ftp.stsci.edu in /pubinfo.
Application of aircraft navigation sensors to enhanced vision systems
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.
1993-01-01
In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.
FLORA™: Phase I development of a functional vision assessment for prosthetic vision users
Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy
2014-01-01
Background: Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population in order to evaluate the impact of new vision restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional vision ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. Methods: The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional vision tasks for observation of performance, and a case narrative summary. Results were analyzed to determine whether the interview questions and functional vision tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, twenty-six subjects were assessed with the FLORA. Seven different evaluators administered the assessment. Results: All 14 interview questions were asked. All 35 functional vision tasks were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options -- impossible (33%), difficult (23%), moderate (24%), and easy (19%) -- were used by the evaluators. Evaluators also judged the amount of vision the subjects used to complete the various tasks; on average, subjects were observed using vision 75% of the time with the System ON and 29% with the System OFF. Conclusion: The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as a functional vision and well-being assessment tool. PMID:25675964
Night vision: changing the way we drive
NASA Astrophysics Data System (ADS)
Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.
2001-03-01
A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.
NASA Astrophysics Data System (ADS)
Kyrkou, Christos; Theocharides, Theocharis
2016-07-01
Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field-programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms and the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan-6 FPGA platform for face detection indicate a data search reduction of up to 95%, which allows the system to process up to 50 1024×768-pixel images per second with a significantly reduced number of false positives.
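The abstract describes a cascade of cheap per-window tests placed ahead of the expensive classifier. A minimal Python sketch of that idea, with the thresholds, field names, and classifier all assumed for illustration (not taken from the paper):

```python
# Hedged sketch of a visual-feature-directed search cascade: cheap tests
# (motion, depth, edges) discard windows before the classifier runs.

def search_cascade(windows, classify,
                   motion_thr=0.1, depth_range=(0.5, 5.0), edge_thr=0.2):
    detections, examined = [], 0
    for w in windows:  # each w: dict with 'motion', 'depth', 'edges', 'pixels'
        if w["motion"] < motion_thr:        # stage 1: static region, skip
            continue
        if not depth_range[0] <= w["depth"] <= depth_range[1]:
            continue                        # stage 2: implausible depth
        if w["edges"] < edge_thr:           # stage 3: too little structure
            continue
        examined += 1                       # only survivors are classified
        if classify(w["pixels"]):
            detections.append(w)
    return detections, examined

windows = [{"motion": 0.0, "depth": 1.0, "edges": 0.9, "pixels": None},
           {"motion": 0.5, "depth": 1.2, "edges": 0.8, "pixels": None}]
dets, n = search_cascade(windows, classify=lambda p: True)
print(len(dets), n)  # 1 1 -> half the windows never reached the classifier
```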
Dielectric elastomer vibrissal system for active tactile sensing
NASA Astrophysics Data System (ADS)
Conn, Andrew T.; Pearson, Martin J.; Pipe, Anthony G.; Welsby, Jason; Rossiter, Jonathan
2012-04-01
Rodents are able to dexterously navigate confined and unlit environments by extracting spatial and textural information with their whiskers (or vibrissae). Vibrissal-based active touch is suited to a variety of applications where vision is occluded, such as search-and-rescue operations in collapsed buildings. In this paper, a compact dielectric elastomer vibrissal system (DEVS) is described that mimics the vibrissal follicle-sinus complex (FSC) found in rodents. Like the vibrissal FSC, the DEVS encapsulates all sensitive mechanoreceptors at the root of a passive whisker within an antagonistic muscular system. Typically, rats actively whisk arrays of macro-vibrissae with amplitudes of up to +/-25°. It is demonstrated that these properties can be replicated by exploiting the characteristic large actuation strains and passive compliance of dielectric elastomers. A prototype DEVS is developed using VHB 4905 and embedded strain gauges bonded to the root of a tapered whisker. The DEVS is demonstrated to produce a maximum rotational output of +/-22.8°. An electro-mechanical model of the DEVS is derived, which incorporates a hyperelastic material model and Euler-Bernoulli beam equations. The model is shown to predict experimental measurements of whisking stroke amplitude and whisker deflection.
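For reference, the Euler-Bernoulli relation underlying such a beam model takes the standard textbook form below (not reproduced from the paper), with transverse deflection w(x), Young's modulus E, second moment of area I(x) (which varies along a tapered whisker), and distributed load q(x):

```latex
\frac{d^{2}}{dx^{2}}\!\left(E\,I(x)\,\frac{d^{2}w}{dx^{2}}\right) = q(x)
```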
A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)
Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon
1990-01-01
Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...
Heterogeneous Embedded Real-Time Systems Environment
2003-12-01
AFRL-IF-RS-TR-2003-290, Final Technical Report, December 2003: Heterogeneous Embedded Real-Time Systems Environment. Authors: Cosmo Castellano and James Graham. Contract F30602-97-C-0259.
Machine Vision Systems for Processing Hardwood Lumber and Logs
Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline
1992-01-01
Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...
Machine vision system for inspecting characteristics of hybrid rice seed
NASA Astrophysics Data System (ADS)
Cheng, Fang; Ying, Yibin
2004-03-01
Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. The analysis of rice seed reflectance curves showed that the light-source wavelength for discriminating diseased seeds from normal rice seeds in the monochromatic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed using a computer vision system, an adjustable color machine vision system was developed. The machine vision system with a 20 mm to 25 mm lens extender produces close-up images, which makes it easier to recognize characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease and for shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system. The same algorithms yielded better results under optimized conditions for rice seed quality inspection. Specifically, the image processing can resolve details such as fine fissures.
Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects
NASA Technical Reports Server (NTRS)
Montes, Leticia; Bowers, David; Lumia, Ron
1998-01-01
This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, un-modeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. This system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision system and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on the information from the vision system, then performing the pick/place operation. This work enhances and is a part of the Low Cost Virtual Collaborative Environment which provides remote simulation and control of equipment.
Supervised linear dimensionality reduction with robust margins for object recognition
NASA Astrophysics Data System (ADS)
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have been increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to achieve good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial to achieving robust performance in the presence of outliers.
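A minimal sketch of the Median hit / Median miss idea described above, assuming Euclidean distances and illustrative data; the paper's exact criteria may differ:

```python
# Robust local margin for one sample: median distance to other-class samples
# ("Median miss") minus median distance to same-class samples ("Median hit").
import numpy as np

def median_margin(X, y, i):
    d = np.linalg.norm(X - X[i], axis=1)
    same = d[y == y[i]]
    same = same[same > 0]          # drop the zero self-distance
    diff = d[y != y[i]]
    return np.median(diff) - np.median(same)

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])
print(median_margin(X, y, 0))      # large positive value: well separated
```

Using medians rather than means is what makes the margin insensitive to a few mislabeled (outlier) neighbors, which is the robustness property the abstract emphasizes.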
Street Viewer: An Autonomous Vision Based Traffic Tracking System.
Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano
2016-06-03
The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing the traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, allows one to improve the overall accuracy and robustness of the system, since each layer is aimed at refining for the following layers the information it receives as input. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and running the system for long periods of time.
An Incremental Life-cycle Assurance Strategy for Critical System Certification
2014-11-04
Embedded software systems for safe aircraft operation introduce a new class of problems not addressed by traditional system modeling and analysis: data stream characteristics such as latency jitter affect control behavior, and system-level failures still occur despite fault tolerance techniques being deployed, with the embedded software system a major source of such failures.
AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.
Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott
2014-11-01
This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.
The Clear Creek Envirohydrologic Observatory: From Vision Toward Reality
NASA Astrophysics Data System (ADS)
Just, C.; Muste, M.; Kruger, A.
2007-12-01
As the vision of a fully functional Clear Creek Envirohydrologic Observatory comes closer to reality, the opportunities for significant watershed science advances in the near future become more apparent. As a starting point in approaching this vision, we focused on creating a working example of cyberinfrastructure in the hydrologic and environmental sciences. The system will integrate a broad range of technologies and ideas: wired and wireless sensors, low-power wireless communication, embedded microcontrollers, commodity cellular networks, the Internet, unattended quality assurance, metadata, relational databases, machine-to-machine communication, interfaces to hydrologic and environmental models, feedback, and external inputs. Hardware: An accomplishment to date is "in-house" developed sensor networking electronics to complement commercially available communications. The first of these networkable sensors are dielectric soil moisture probes that are arrayed and equipped with wireless connectivity for communications. Commercially available data logging and telemetry-enabled systems deployed at the Clear Creek testbed include a Campbell Scientific CR1000 datalogger, a Redwing 100 cellular modem, a YA Series yagi antenna, a NP12 rechargeable battery, and a BP SX20U solar panel. This networking equipment has been coupled with Hach DS5X water quality sondes, DTS-12 turbidity probes, and MicroLAB nutrient analyzers. Software: Our existing data model is an Arc Hydro-based geodatabase customized with applications for extraction and population of the database with third-party data. The following third-party data are acquired automatically and in real time into the Arc Hydro customized database: 1) geophysical data: 10 m DEM and soil grids, soils; 2) land use/land cover data; and 3) eco-hydrological data: radar-based rainfall estimates, stream gage, streamlines, and water quality data. New processing software for data analysis of Acoustic Doppler Current Profiler (ADCP) measurements has been finalized. The software package provides mean flow field and turbulence characteristics obtained by operating the ADCP at fixed points or using the moving-boat approach. Current Work: The current development work is focused on extracting and populating the Clear Creek database with in-situ measurements acquired and transmitted in real time with sensors deployed in the Clear Creek watershed.
Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.
2005-01-01
Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development and demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.
A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)
Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon
1992-01-01
Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...
Machine Vision Giving Eyes to Robots. Resources in Technology.
ERIC Educational Resources Information Center
Technology Teacher, 1990
1990-01-01
This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)
3-D Signal Processing in a Computer Vision System
Dongping Zhu; Richard W. Conners; Philip A. Araman
1991-01-01
This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...
Intensity measurement of automotive headlamps using a photometric vision system
NASA Astrophysics Data System (ADS)
Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.
1996-01-01
Requirements for automotive headlamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.
Varsano, Daniele; Caprasecca, Stefano; Coccia, Emanuele
2017-01-11
Photoinitiated phenomena play a crucial role in many living organisms. Plants, algae, and bacteria absorb sunlight to perform photosynthesis, and convert water and carbon dioxide into molecular oxygen and carbohydrates, thus forming the basis for life on Earth. The vision of vertebrates is accomplished in the eye by a protein called rhodopsin, which upon photon absorption performs an ultrafast isomerisation of the retinal chromophore, triggering the signal cascade. Many other biological functions start with the photoexcitation of a protein-embedded pigment, followed by complex processes comprising, for example, electron or excitation energy transfer in photosynthetic complexes. The optical properties of chromophores in living systems are strongly dependent on the interaction with the surrounding environment (nearby protein residues, membrane, water), and the complexity of such interplay is, in most cases, at the origin of the functional diversity of the photoactive proteins. The specific interactions with the environment often lead to a significant shift of the chromophore excitation energies, compared with their absorption in solution or gas phase. The investigation of the optical response of chromophores is generally not straightforward, from both experimental and theoretical standpoints; this is due to the difficulty in understanding diverse behaviours and effects, occurring at different scales, with a single technique. In particular, the role played by ab initio calculations in assisting and guiding experiments, as well as in understanding the physics of photoactive proteins, is fundamental. At the same time, owing to the large size of the systems, more approximate strategies which take into account the environmental effects on the absorption spectra are also of paramount importance. Here we review the recent advances in the first-principle description of electronic and optical properties of biological chromophores embedded in a protein environment. We show their applications on paradigmatic systems, such as the light-harvesting complexes, rhodopsin and green fluorescent protein, emphasising the theoretical frameworks which are of common use in solid state physics, and emerging as promising tools for biomolecular systems.
Can colours be used to segment words when reading?
Perea, Manuel; Tejero, Pilar; Winskel, Heather
2015-07-01
Rayner, Fischer, and Pollatsek (1998, Vision Research) demonstrated that reading unspaced text in Indo-European languages produces a substantial reading cost in word identification (as deduced from an increased word-frequency effect on target words embedded in the unspaced vs. spaced sentences) and in eye movement guidance (as deduced from landing sites closer to the beginning of the words in unspaced sentences). However, the addition of spaces between words comes with a cost: nearby words may fall outside high-acuity central vision, thus reducing the potential benefits of parafoveal processing. In the present experiment, we introduced a salient visual cue intended to facilitate the process of word segmentation without compromising visual acuity: each alternating word was printed in a different colour. Results revealed only a small reading cost for unspaced alternating-colour sentences relative to the spaced sentences. Thus, the present data demonstrate that colour can be useful for segmenting words for readers of spaced orthographies. Copyright © 2015 Elsevier B.V. All rights reserved.
Goavec-Mérou, G; Chrétien, N; Friedt, J-M; Sandoz, P; Martin, G; Lenczner, M; Ballandras, S
2014-01-01
Vibrating mechanical structure characterization is demonstrated using contactless techniques best suited to mobile and rotating equipment. Fast measurement rates are achieved using Field Programmable Gate Array (FPGA) devices as real-time digital signal processors. Two kinds of algorithms are implemented on FPGA and experimentally validated in the case of a vibrating tuning fork. A first application concerns in-plane displacement detection by vision with sampling rates above 10 kHz, thus reaching frequency ranges above the audio range. A second demonstration concerns pulsed-RADAR cooperative target phase detection and is applied to radiofrequency acoustic transducers used as passive wireless strain gauges. In this case, the 250 ksamples/s refresh rate achieved is limited only by the acoustic sensor design and not by the detection bandwidth. These realizations illustrate the efficiency, interest, and potential of FPGA-based real-time digital signal processing for the contactless interrogation of passive embedded probes at high refresh rates.
AADL and Model-based Engineering
2014-10-20
Feiler, Oct 20, 2014, © 2014 Carnegie Mellon University. We rely on software for safe aircraft operation: embedded software systems introduce a new class of problems, data stream characteristics such as latency jitter affect control behavior, and system-level failures still occur despite fault tolerance techniques being deployed, with embedded software a major source of such failures.
The study of stereo vision technique for the autonomous vehicle
NASA Astrophysics Data System (ADS)
Li, Pei; Wang, Xi; Wang, Jiang-feng
2015-08-01
Stereo vision technology using two or more cameras can recover 3D information within the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle to judge pavement conditions within the field of view and to measure obstacles on the road. In this paper, the use of stereo vision for obstacle-avoidance measurement on an autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware is built, the software is debugged, and the measurement performance is illustrated with measured data. Experiments show that stereo vision can effectively reconstruct 3D information within the field of view and provide the basis for judging pavement conditions. Compared with the radar used in unmanned vehicle navigation and measurement systems, the stereo vision system has advantages such as low cost and long measurement range, giving it good application prospects.
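For context, the standard rectified-stereo triangulation relation (not quoted from the paper) recovers depth Z from focal length f, baseline B, and disparity d:

```latex
Z = \frac{f\,B}{d}
```

Since depth is inversely proportional to disparity, the accuracy of disparity estimation directly bounds the achievable range accuracy of such a system.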
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan
2017-06-01
The Chang'e-3 was the first lunar soft-landing probe of China. It was composed of the lander and the lunar rover. The Chang'e-3 successfully landed in the northwest of the Mare Imbrium on December 14, 2013. The lunar rover carried out movement, imaging, and geological survey tasks after landing. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism, and the inertial measurement unit (IMU). The Navcam system was composed of two fixed-focal-length cameras. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEMs) of the surrounding region, and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field can be built to calibrate the stereo vision system in a laboratory on Earth. However, the parameters of the stereo vision system change after the launch, the orbital changes, the braking, and the landing. Therefore, the stereo vision system should be self-calibrated on the moon. An integrated self-calibration method based on bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. The stereo vision system can be self-calibrated with the proposed method under the unknown lunar environment, and all parameters can be estimated simultaneously. The experiment was conducted in a ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method, and the weighted least-squares method. The analyzed results proved that the accuracy of the proposed method was superior to those of the other methods. Finally, the proposed method was used in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the moon.
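For reference, bundle block adjustment in its generic form (the standard formulation, not necessarily the paper's exact one) jointly estimates the camera/system parameters C_i and object points X_j by minimising the reprojection error against the image observations u_ij, where π denotes the projection function:

```latex
\min_{\{\mathbf{C}_i\},\,\{\mathbf{X}_j\}} \; \sum_{i,j} \left\lVert \mathbf{u}_{ij} - \pi\!\left(\mathbf{C}_i,\mathbf{X}_j\right) \right\rVert^{2}
```

Because every ray contributes to one joint optimisation, the intrinsic, extrinsic, and mechanism parameters can all be estimated simultaneously, which is the property the abstract highlights.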
NASA Astrophysics Data System (ADS)
Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won
2005-12-01
Embedded systems have been applied to many fields, including households and industrial sites. User interface technology with simple on-screen display has been implemented more and more widely. User demands are increasing, and the system has a wider range of applicable fields due to the high penetration rate of the Internet. Therefore, demand for embedded systems tends to rise. An embedded system for image tracking was implemented. This system uses a fixed IP for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server. Each frame of image data from the web camera is compared to measure a displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM920T core from Samsung. An embedded Linux kernel was ported to the board and the root file system mounted. The stored images are sent to the client PC through the web browser, using the network functions of Linux and a program built on the TCP/IP protocol.
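A hedged sketch of displacement-vector estimation by block matching, as named in the abstract above; the block size, search range, and function names are illustrative, not from the system described:

```python
# Exhaustive block matching: find the shift minimising the sum of absolute
# differences (SAD) between a reference block and candidate blocks.
import numpy as np

def block_match(prev, curr, y, x, block=8, search=4):
    """Return (dy, dx) minimising SAD between a block of `prev` anchored
    at (y, x) and shifted blocks of `curr`."""
    ref = prev[y:y+block, x:x+block].astype(np.int32)
    best_sad, best = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if y + dy < 0 or x + dx < 0:
                continue  # avoid wrapping past the top/left edge
            cand = curr[y+dy:y+dy+block, x+dx:x+dx+block].astype(np.int32)
            if cand.shape != ref.shape:
                continue  # candidate fell outside the bottom/right edge
            sad = int(np.abs(ref - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

prev = np.zeros((32, 32), dtype=np.uint8); prev[10:18, 10:18] = 255
curr = np.zeros((32, 32), dtype=np.uint8); curr[12:20, 11:19] = 255
print(block_match(prev, curr, 10, 10))  # (2, 1): moved down 2, right 1
```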
Heartbeat-based error diagnosis framework for distributed embedded systems
NASA Astrophysics Data System (ADS)
Mishra, Swagat; Khilar, Pabitra Mohan
2012-01-01
Distributed Embedded Systems have significant applications in automobile industry as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real time system. We use heartbeat monitoring, check pointing and model based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosis and shutting down of faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message passing system.
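A minimal sketch of heartbeat-based fault detection in the spirit of the framework above; the class name, timeout, and node id are illustrative assumptions:

```python
# A node is declared faulty when no heartbeat arrives within the timeout,
# so a faulty actuator can be shut down before the system becomes unsafe.
import time

class HeartbeatMonitor:
    def __init__(self, timeout=0.5):
        self.timeout = timeout
        self.last_seen = {}              # node id -> time of last heartbeat

    def heartbeat(self, node_id):
        """Called whenever a heartbeat message arrives from a node."""
        self.last_seen[node_id] = time.monotonic()

    def faulty_nodes(self):
        """Nodes whose heartbeat has not arrived within the timeout."""
        now = time.monotonic()
        return [n for n, t in self.last_seen.items() if now - t > self.timeout]

monitor = HeartbeatMonitor(timeout=0.5)
monitor.heartbeat("brake_actuator")
time.sleep(0.6)                          # simulate a missed heartbeat
print(monitor.faulty_nodes())            # ['brake_actuator'] -> shut it down
```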
Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft
2017-06-01
This thesis examines the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system, and their integration into an autonomous aircraft control system. Subject terms: autonomous systems, auto-land, computer vision, image processing.
Remote-controlled vision-guided mobile robot system
NASA Astrophysics Data System (ADS)
Ande, Raymond; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.
Reconfigurable vision system for real-time applications
NASA Astrophysics Data System (ADS)
Torres-Huitzil, Cesar; Arias-Estrada, Miguel
2002-03-01
Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs, and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
NASA Astrophysics Data System (ADS)
Demetriou, Demetris; Campagna, Michele; Racetin, Ivana; Konecny, Milan
2017-09-01
INSPIRE is the EU's authoritative Spatial Data Infrastructure (SDI) in which each Member State provides access to its spatial data across a wide spectrum of data themes to support policy making. In contrast, Volunteered Geographic Information (VGI) is one type of user-generated geographic information where volunteers use the web and mobile devices to create, assemble, and disseminate spatial information. There are similarities and differences between SDIs and VGI initiatives, as well as advantages and disadvantages. Thus, the integration of these two data sources will enhance what is offered to end users, helping decision makers and the wider community to solve complex spatial problems, manage emergency situations, and get useful information for people's daily activities. Although some efforts in this direction have arisen, several key issues need to be considered and resolved. Beyond this integration, the vision is the development of a global integrated GIS platform, which extends the capabilities of a typical data hub by embedding on-line spatial and non-spatial applications, to deliver both static and dynamic outputs to support planning and decision making. In this context, this paper discusses the challenges of integrating INSPIRE with VGI and outlines a generic framework towards creating a global integrated web-based GIS platform. The tremendously rapid evolution of the Web and geospatial technologies suggests that this "super" global geo-system is not far away.
Integrated Environment for Development and Assurance
2015-01-26
Jan 26, 2015, © 2015 Carnegie Mellon University. We rely on software for safe aircraft operation: embedded software systems introduce a new class of problems, data stream characteristics such as latency jitter affect control behavior, and system-level failures still occur despite fault tolerance techniques being deployed, with the embedded software system a major source of such failures.
Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar
2004-07-01
In this paper a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to robot soccer, a fast dynamic game that therefore needs an efficient and robust vision system. The vision system is also generally applicable to other robot applications, such as mobile transport robots in production and warehouses, attendant robots, fast vision tracking of targets of interest, and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes. At the same time, a segmentation algorithm is used to find corresponding regions belonging to one of the classes. In the second step, all the regions are examined. Those that are part of the observed object are selected by means of simple logic procedures. The novelty is focused on optimizing the processing time needed to estimate possible object positions. Better results of the vision system are achieved by implementing camera calibration and a shading correction algorithm. The former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
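A compact sketch of the two-step scheme described above: nearest-reference-colour pixel classification followed by grouping into 4-connected regions. The colour classes and the simple flood fill are illustrative stand-ins for the paper's method:

```python
import numpy as np

def classify_pixels(img, classes):
    """Label each pixel with the index of the nearest reference colour."""
    dists = [np.linalg.norm(img - c, axis=-1) for c in classes]
    return np.argmin(np.stack(dists), axis=0)

def regions(labels, cls):
    """Group pixels of one class into 4-connected regions (flood fill)."""
    mask = (labels == cls)
    seen = np.zeros_like(mask, dtype=bool)
    out = []
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        stack, region = [(y, x)], []
        seen[y, x] = True
        while stack:
            cy, cx = stack.pop()
            region.append((cy, cx))
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        out.append(region)
    return out

img = np.zeros((4, 4, 3)); img[1:3, 1:3] = [255, 128, 0]   # orange patch
classes = [np.array([0, 0, 0]), np.array([255, 128, 0])]
labels = classify_pixels(img, classes)
print(len(regions(labels, 1)))   # 1 connected orange region found
```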
Latency in Visionic Systems: Test Methods and Requirements
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.
2005-01-01
A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
NASA Astrophysics Data System (ADS)
Chen, Yung-Sheng; Wang, Jeng-Yau
2015-09-01
Light source plays a significant role in acquiring good-quality images of objects, facilitating image processing and pattern recognition. For objects with specular surfaces, reflections and halos appearing in the acquired image increase the difficulty of information processing. Such a situation may be improved with the assistance of a suitable diffuse light source. Consider reading resistors via computer vision: owing to the resistor's specular reflective surface, the image suffers severely non-uniform luminous intensity, yielding a higher recognition error rate without a well-controlled light source. A measurement system is presented in this paper, comprising mainly a digital microscope embedded in a replaceable diffuse cover, a ring-type LED embedded in a small pad carrying the resistor under evaluation, and Arduino microcontrollers connected to a PC. Several replaceable, cost-effective diffuse covers, made from a paper bowl, cup, and box pasted inside with white paper, are presented for reducing specular reflection and halo effects, and are compared with a commercial diffuse dome. The ring-type LED can be flexibly configured for full or partial lighting depending on the application. For each self-made diffuse cover, a set of resistors with 4 or 5 color bands is captured via the digital microscope for experiments. The signal-to-noise ratio of the segmented resistor image is used for performance evaluation. The detected principal axis of the resistor body is used for the partial LED configuration to further improve the lighting condition. Experimental results confirm that the proposed mechanism can not only evaluate cost-effective diffuse light sources but also be extended into an automatic recognition system for resistor reading.
An embedded face-classification system for infrared images on an FPGA
NASA Astrophysics Data System (ADS)
Soto, Javier E.; Figueroa, Miguel
2014-10-01
We present a face-classification architecture for long-wave infrared (IR) images implemented on a Field Programmable Gate Array (FPGA). The circuit is fast, compact, and low power; it can recognize faces in real time and be embedded in a larger image-processing and computer vision system operating locally on an IR camera. The algorithm uses Local Binary Patterns (LBP) to perform feature extraction on each IR image. First, each pixel in the image is represented as an LBP pattern that encodes the similarity between the pixel and its neighbors. Uniform LBP codes are then used to reduce the number of patterns to 59 while preserving more than 90% of the information contained in the original LBP representation. Then, the image is divided into 64 non-overlapping regions, and each region is represented as a 59-bin histogram of patterns. Finally, the algorithm concatenates all 64 regions to create a 3,776-bin spatially enhanced histogram. We reduce the dimensionality of this histogram using Linear Discriminant Analysis (LDA), which improves clustering and enables us to store an entire database of 53 subjects on-chip. During classification, the circuit applies LBP and LDA to each incoming IR image in real time, and compares the resulting feature vector to each pattern stored in the local database using the Manhattan distance. We implemented the circuit on a Xilinx Artix-7 XC7A100T FPGA and tested it with the UCHThermalFace database, which consists of 28 81×150-pixel images of each of 53 subjects in indoor and outdoor conditions. The circuit achieves a 98.6% hit ratio, trained with 16 images and tested with 12 images of each subject in the database. Using a 100 MHz clock, the circuit classifies 8,230 images per second and consumes only 309 mW.
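A sketch of the LBP step described above, following the standard 8-neighbour formulation and uniform-pattern test rather than the exact FPGA datapath:

```python
# Compute 8-neighbour LBP codes and test patterns for uniformity.
import numpy as np

def lbp_image(img):
    """8-neighbour LBP code for each interior pixel of a 2-D array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

def is_uniform(pattern):
    """Uniform if the circular bit string has at most 2 transitions;
    the 58 uniform patterns plus a catch-all bin give the 59 bins above."""
    bits = [(pattern >> i) & 1 for i in range(8)]
    return sum(b1 != b2 for b1, b2 in zip(bits, bits[1:] + bits[:1])) <= 2

img = (np.random.default_rng(0).random((150, 81)) * 255).astype(np.uint8)
codes = lbp_image(img)
print(codes.shape, is_uniform(0b00001111))  # (148, 79) True
```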
Design and implementation of non-linear image processing functions for CMOS image sensor
NASA Astrophysics Data System (ADS)
Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel
2012-11-01
Today, solid-state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging, and industrial vision systems. These image sensors require the integration, in the focal plane (or near the focal plane), of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64-pixel image sensor built in a standard 0.35 μm CMOS technology, including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2-pixel neighbourhood. Each MMU needs 52 transistors, and the pitch of one pixel is 40×40 μm. The total area of the 64×64-pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10,000 frames per second) and minima/maxima calculation in less than one millisecond.
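A software analogue of the MMU's function, computing the min and max over each 2×2 pixel neighbourhood (a sketch of the operation, not the nLiRIC analog circuit):

```python
import numpy as np

def minmax_2x2(img):
    """Min and max over every sliding 2x2 neighbourhood of a 2-D array."""
    blocks = np.stack([img[:-1, :-1], img[:-1, 1:],
                       img[1:, :-1], img[1:, 1:]])
    return blocks.min(axis=0), blocks.max(axis=0)

img = np.array([[3, 7, 1], [5, 2, 8], [9, 4, 6]])
mn, mx = minmax_2x2(img)
print(mn)  # [[2 1] [2 2]]
print(mx)  # [[7 8] [9 8]]
```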
Nakamura, Brad J; Mueller, Charles W; Higa-McMillan, Charmaine; Okamura, Kelsie H; Chang, Jaime P; Slavin, Lesley; Shimabukuro, Scott
2014-01-01
Hawaii's Child and Adolescent Mental Health Division provides a unique illustration of a youth public mental health system with a long and successful history of large-scale quality improvement initiatives. Many advances are linked to flexibly organizing and applying knowledge gained from the scientific literature and move beyond installing a limited number of brand-named treatment approaches that might be directly relevant only to a small handful of system youth. This article takes a knowledge-to-action perspective and outlines five knowledge management strategies currently under way in Hawaii. Each strategy represents one component of a larger coordinated effort at engineering a service system focused on delivering both brand-named treatment approaches and complementary strategies informed by the evidence base. The five knowledge management examples are (a) a set of modular-based professional training activities for currently practicing therapists, (b) an outreach initiative for supporting youth evidence-based practices training at Hawaii's mental health-related professional programs, (c) an effort to increase consumer knowledge of and demand for youth evidence-based practices, (d) a practice and progress agency performance feedback system, and (e) a sampling of system-level research studies focused on understanding treatment as usual. We end by outlining a small set of lessons learned and a longer-term vision for embedding these efforts into the system's infrastructure.
NASA Technical Reports Server (NTRS)
Boyer, K. L.; Wuescher, D. M.; Sarkar, S.
1991-01-01
Dynamic edge warping (DEW), a technique for recovering reasonably accurate disparity maps from uncalibrated stereo image pairs, is presented. No precise knowledge of the epipolar camera geometry is assumed. The technique is embedded in a system with structural stereopsis on the front end and robust estimation from digital photogrammetry on the back end, for the purpose of self-calibrating stereo image pairs. Once the relative camera orientation is known, the epipolar geometry is computed and the system can use this information to refine its representation of the object space. Such a system will find application in the autonomous extraction of terrain maps from stereo aerial photographs, for which camera position and orientation are unknown a priori, and in online autonomous calibration maintenance for robotic vision applications, in which the cameras are subject to vibration and other physical disturbances after calibration. This work thus forms a component of an intelligent system that begins with a pair of images and, having only vague knowledge of the conditions under which they were acquired, produces an accurate, dense, relative depth map. The resulting disparity map can also be used directly in some high-level applications involving qualitative scene analysis, spatial reasoning, and perceptual organization of the object space. The system as a whole substitutes high-level information and constraints for precise geometric knowledge in driving and constraining the early correspondence process.
Function-based design process for an intelligent ground vehicle vision system
NASA Astrophysics Data System (ADS)
Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.
2010-10-01
An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
Bubbling and on-off intermittency in bailout embeddings.
Cartwright, Julyan H E; Magnasco, Marcelo O; Piro, Oreste; Tuval, Idan
2003-07-01
We establish and investigate the conceptual connection between the dynamics of the bailout embedding of a Hamiltonian system and the dynamical regimes associated with the occurrence of bubbling and blowout bifurcations. The roles of the invariant manifold and the dynamics restricted to it, required in bubbling and blowout bifurcating systems, are played in the bailout embedding by the embedded Hamiltonian dynamical system. The Hamiltonian nature of the dynamics is precisely the distinctive feature of this instance of a bubbling or blowout bifurcation. The detachment of the embedding trajectories from the original ones can thus be thought of as transient on-off intermittency, and noise-induced avoidance of some regions of the embedded phase space can be recognized as Hamiltonian bubbling.
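For context, the bailout embedding of a flow ẋ = f(x) is usually written in the form below (recalled from the authors' earlier work on bailout embeddings; consult the paper for the exact conventions). The factor k(x) is taken negative on the regions of phase space to be avoided, so that embedding trajectories there detach from the original dynamics, which is the transient on-off intermittency discussed above:

```latex
\frac{d}{dt}\left(\dot{\mathbf{x}} - \mathbf{f}(\mathbf{x})\right) = -k(\mathbf{x})\left(\dot{\mathbf{x}} - \mathbf{f}(\mathbf{x})\right)
```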
GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa
2004-01-01
The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.
Computer Vision System For Locating And Identifying Defects In Hardwood Lumber
NASA Astrophysics Data System (ADS)
Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.
1989-03-01
This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, this paper describes attempts to create the vision system that will power this automatic cutup system. There are a number of factors that make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly the same; in fact, there can be significant differences in appearance among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Secondly, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes. The products range from hardwood flooring to fancy hardwood furniture, from simple mill work to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper will describe the vision system that has been developed. It will assess the current system capabilities, and it will discuss the directions for future research. It will be argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.
Audible vision for the blind and visually impaired in indoor open spaces.
Yu, Xunyi; Ganz, Aura
2012-01-01
In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in a real-life, ever-changing environment crowded with people.
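A minimal Python sketch (hypothetical, not the Audible Vision implementation) of the rendering step the abstract implies: compute the landmark's bearing relative to the user's estimated pose and map it to a constant-power stereo pan.

import math

def bearing_to_landmark(user_xy, heading_rad, landmark_xy):
    dx = landmark_xy[0] - user_xy[0]
    dy = landmark_xy[1] - user_xy[1]
    ang = math.atan2(dy, dx) - heading_rad
    return (ang + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]

def stereo_gains(bearing_rad):
    # bearing 0 = straight ahead, +/- pi/2 = hard to one side
    pan = max(-1.0, min(1.0, bearing_rad / (math.pi / 2)))
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)  # (left_gain, right_gain)

left, right = stereo_gains(bearing_to_landmark((0, 0), 0.0, (3, 4)))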
Relating Standardized Visual Perception Measures to Simulator Visual System Performance
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Sweet, Barbara T.
2013-01-01
Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
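A back-of-the-envelope Python sketch of one such relation: 20/20 acuity resolves roughly 1 arcminute, so a display channel is eye-limited near 60 pixels per degree. The resolution and field-of-view figures below are illustrative only.

def pixels_per_degree(h_resolution_px, h_fov_deg):
    return h_resolution_px / h_fov_deg

def snellen_denominator(ppd):
    # 20/20 at ~60 ppd; halving ppd roughly doubles the denominator
    return 20 * 60.0 / ppd

ppd = pixels_per_degree(1920, 60.0)   # e.g. a 1920-px channel spanning 60 degrees
print(ppd, snellen_denominator(ppd))  # 32 ppd, ~20/37.5 equivalent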
Learning from failure in health care: frequent opportunities, pervasive barriers.
Edmondson, A C
2004-12-01
The notion that hospitals and medical practices should learn from failures, both their own and others', has obvious appeal. Yet, healthcare organisations that systematically and effectively learn from the failures that occur in the care delivery process, especially from small mistakes and problems rather than from consequential adverse events, are rare. This article explores pervasive barriers embedded in healthcare's organisational systems that make shared or organisational learning from failure difficult and then recommends strategies for overcoming these barriers to learning from failure, emphasising the critical role of leadership. Firstly, leaders must create a compelling vision that motivates and communicates urgency for change; secondly, leaders must work to create an environment of psychological safety that fosters open reporting, active questioning, and frequent sharing of insights and concerns; and thirdly, case study research on one hospital's organisational learning initiative suggests that leaders can empower and support team learning throughout their organisations as a way of identifying, analysing, and removing hazards that threaten patient safety.
Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis
Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan
2014-01-01
Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. PMID:25350502
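A minimal OpenCV sketch of the two stages named above: a circular Hough transform to hypothesize the spot, then Lucas-Kanade optical flow to track it between frames. Parameter values are assumptions, not the authors' settings.

import cv2
import numpy as np

def detect_spot(gray):
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=100, param2=20, minRadius=2, maxRadius=15)
    if circles is None:
        return None
    x, y, _r = circles[0][0]
    return np.array([[x, y]], dtype=np.float32)

def track_spot(prev_gray, gray, prev_pt):
    next_pt, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pt.reshape(-1, 1, 2), None,
        winSize=(21, 21), maxLevel=3)
    return next_pt.reshape(-1, 2) if status[0][0] == 1 else None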
Open Globe Injury Patient Identification in Warfare Clinical Notes
Apostolova, Emilia; White, Helen A.; Morris, Patty A.; Eliason, David A.; Velez, Tom
2017-01-01
The aim of this study is to utilize the Defense and Veterans Eye Injury and Vision Registry clinical data derived from DoD and VA medical systems, which include documentation of care while in combat, and develop methods for comprehensive and reliable Open Globe Injury (OGI) patient identification. In particular, we focus on the use of free-form clinical notes, since structured data, such as diagnoses or procedure codes, as found in early post-trauma clinical records, may not be a comprehensive and reliable indicator of OGIs. The challenges of the task include a low incidence rate (few positive examples), idiosyncratic military ophthalmology vocabulary, extreme brevity of notes, specialized abbreviations, typos and misspellings. We modeled the problem as a text classification task and utilized a combination of supervised learning (SVMs) and word embeddings learnt in an unsupervised manner, achieving a precision of 92.50% and a recall of 89.83%. The described techniques are applicable to patient cohort identification with limited training data and low incidence rates. PMID:29854104
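A minimal Python sketch of the modeling approach described: represent each note as the average of its word embeddings and train a linear SVM. The tiny vocabulary, vectors, and notes below are hypothetical placeholders, not registry data.

import numpy as np
from sklearn.svm import LinearSVC

emb = {"globe": [0.9, 0.1], "rupture": [0.8, 0.3], "acuity": [0.1, 0.9]}  # toy vectors
DIM = 2

def note_vector(note):
    vecs = [emb[w] for w in note.lower().split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

notes = ["globe rupture", "acuity exam", "rupture globe", "acuity"]
labels = [1, 0, 1, 0]  # 1 = open globe injury
clf = LinearSVC().fit(np.array([note_vector(n) for n in notes]), labels)
print(clf.predict([note_vector("globe rupture noted")]))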
Use of 3D vision for fine robot motion
NASA Technical Reports Server (NTRS)
Lokshin, Anatole; Litwin, Todd
1989-01-01
An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation, a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and problems encountered.
Three-dimensional vision enhances task performance independently of the surgical method.
Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A
2012-10-01
Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P < 0.0001-0.02). Simple tasks took 25 % to 30 % longer to complete and more complex tasks took 75 % longer with 2D than with 3D vision. Only the difficult task was performed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.
Design of signal reception and processing system of embedded ultrasonic endoscope
NASA Astrophysics Data System (ADS)
Li, Ming; Yu, Feng; Zhang, Ruiqiang; Li, Yan; Chen, Xiaodong; Yu, Daoyin
2009-11-01
Embedded Ultrasonic Endoscope, based on an embedded microprocessor and an embedded real-time operating system, sends a micro ultrasonic probe into the coelom through the biopsy channel of the Electronic Endoscope to get the fault histology features of digestive organs by rotary scanning, and acquires pictures of the alimentary canal mucosal surface. At the same time, ultrasonic signals are processed by the signal reception and processing system, forming images of the full histology of the digestive organs. The Signal Reception and Processing System is an important component of the Embedded Ultrasonic Endoscope. However, the traditional design, using multi-level amplifiers and special digital processing circuits to implement signal reception and processing, no longer satisfies the high-performance, miniaturization, and low-power requirements of embedded systems, and because of the high noise introduced by multi-level amplification, the extraction of small signals becomes difficult. Therefore, this paper presents a method of signal reception and processing based on a double variable gain amplifier and an FPGA, increasing the flexibility and dynamic range of the Signal Reception and Processing System, improving the system noise level, and reducing power consumption. Finally, we set up the embedded experiment system, using a transducer with a center frequency of 8 MHz to scan membrane samples, and display the image of the ultrasonic echo reflected by each layer of the membrane at a frame rate of 5 Hz, verifying the correctness of the system.
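A minimal numeric sketch of two operations implied above: depth-dependent (time-gain) amplification, realized in hardware by the double variable gain amplifier, followed by envelope detection of the echo for display. The attenuation constant and sampling figures are assumptions.

import numpy as np
from scipy.signal import hilbert

fs = 40e6                                    # sampling rate, Hz (assumed)
c = 1540.0                                   # speed of sound in tissue, m/s
t = np.arange(0, 50e-6, 1 / fs)
rf = np.sin(2 * np.pi * 8e6 * t) * np.exp(-t / 10e-6)  # toy 8 MHz echo

alpha_db_per_cm = 0.5 * 8.0                  # ~0.5 dB/cm/MHz at 8 MHz, one way
depth_cm = 100.0 * c * t / 2.0               # round-trip time -> depth
gain = 10 ** (2 * alpha_db_per_cm * depth_cm / 20.0)   # TGC curve
envelope = np.abs(hilbert(rf * gain))        # brightness for one image line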
The precision measurement and assembly for miniature parts based on double machine vision systems
NASA Astrophysics Data System (ADS)
Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.
2015-02-01
In the assembly of miniature parts, structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrated with a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment was therefore developed with two integrated machine vision systems. In this system, a horizontal vision system is employed to measure the position of feature structures in the parts' side view, which cannot be seen by the vertical system. The position measured by the horizontal camera is converted to the vertical vision system using the calibration information. With careful calibration, the parts' alignment and positioning in the assembly process can be guaranteed. The developed assembly equipment has the characteristics of easy implementation, modularization and high cost performance. The handling of the miniature parts and the assembly procedure are briefly introduced. The calibration procedure is given and the assembly error is analyzed for compensation.
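A minimal Python sketch of the hand-off the abstract describes: a point measured in the horizontal (side-view) camera frame is re-expressed in the vertical camera frame via calibrated extrinsics. The 4x4 transforms would come from the calibration step; the values here are placeholders.

import numpy as np

T_world_from_horiz = np.eye(4)               # horizontal camera -> world (from calibration)
T_world_from_vert = np.eye(4)                # vertical camera -> world (from calibration)
T_world_from_vert[:3, 3] = [0.0, 0.0, 0.2]   # placeholder offset

def to_vertical_frame(p_horiz):
    p = np.append(p_horiz, 1.0)              # homogeneous coordinates
    T_vert_from_horiz = np.linalg.inv(T_world_from_vert) @ T_world_from_horiz
    return (T_vert_from_horiz @ p)[:3]

print(to_vertical_frame(np.array([0.01, 0.0, 0.05])))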
Teaching Embedded System Concepts for Technological Literacy
ERIC Educational Resources Information Center
Winzker, M.; Schwandt, A.
2011-01-01
A basic understanding of technology is recognized as important knowledge even for students not connected with engineering and computer science. This paper shows that embedded system concepts can be taught in a technological literacy course. An embedded system teaching block that has been used in an electronics module for non-engineers is…
A telepresence robot system realized by embedded object concept
NASA Astrophysics Data System (ADS)
Vallius, Tero; Röning, Juha
2006-10-01
This paper presents the Embedded Object Concept (EOC) and a telepresence robot system which is a test case for the EOC. The EOC utilizes common object-oriented methods used in software by applying them to combined Lego-like software-hardware entities. These entities represent objects in object-oriented design methods, and they are the building blocks of embedded systems. The goal of the EOC is to make the design of embedded systems faster and easier. This concept enables people without comprehensive knowledge of electronics design to create new embedded systems, and for experts it shortens the design time of new embedded systems. We present the current status of a telepresence robot created with second-generation Atomi-objects, which is the name of our implementation of the embedded objects. The telepresence robot is a relatively complex test case for the EOC. The robot has been constructed using incremental device development, which is made possible by the architecture of the EOC. The robot contains video and audio exchange capability and a controlling system for driving with two wheels. The robot is built in two versions, the first consisting of a PC device and Atomi-objects, and the second consisting of only Atomi-objects. The robot is currently incomplete, but most of it has been successfully tested.
Computer vision for foreign body detection and removal in the food industry
USDA-ARS?s Scientific Manuscript database
Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
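A minimal OpenCV sketch of the core capability described: recovering a target's pose from a single image plus knowledge of its solid geometry (model-based pose estimation). The model points, detected image points, and intrinsics are placeholders.

import cv2
import numpy as np

object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0],
                       [0, 0, 0.1], [0.1, 0, 0.1]], dtype=np.float64)  # known geometry, m
image_pts = np.array([[320, 240], [420, 238], [424, 338], [322, 341],
                      [318, 150], [421, 148]], dtype=np.float64)       # detected features, px
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])            # camera intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # target rotation w.r.t. the camera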
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
Using Vision System Technologies for Offset Approaches in Low Visibility Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.
2015-01-01
Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appear feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen
Static Schedulers for Embedded Real-Time Systems
1989-12-01
Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing... provide static schedulers for the Embedded Real-Time Systems with a single processor using the Ada programming language. The independent nonpreemptable... support the Computer Aided Rapid Prototyping for Embedded Real-Time Systems so that we determine whether the system, as designed, meets the required
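A minimal Python sketch of one classic static-scheduling feasibility check such a tool might apply, the Liu-Layland rate-monotonic utilization bound for independent periodic tasks on a single processor; this is a standard illustration, not the report's Ada scheduler.

def rm_schedulable(tasks):
    # tasks: list of (computation_time, period) tuples
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1.0 / n) - 1)   # tends to ln 2 ~ 0.693 as n grows
    return utilization <= bound, utilization, bound

ok, u, bound = rm_schedulable([(1, 4), (2, 8), (1, 16)])   # U = 0.5625 <= ~0.7798
print(ok, u, bound)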
Subsystem real-time time dependent density functional theory.
Krishtal, Alisa; Ceresoli, Davide; Pavanello, Michele
2015-04-21
We present the extension of the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) to real-time Time Dependent Density Functional Theory (rt-TDDFT). FDE is a DFT-in-DFT embedding method that allows one to partition a larger Kohn-Sham system into a set of smaller, coupled Kohn-Sham systems. In addition to the computational advantage, FDE provides physical insight into the properties of embedded systems and the coupling interactions between them. The extension to rt-TDDFT is done straightforwardly by evolving the Kohn-Sham subsystems in time simultaneously, while updating the embedding potential between the systems at every time step. Two main applications are presented: explicit excitation energy transfer in real time between subsystems, demonstrated for the case of the Na4 cluster, and the effect of the embedding on the optical spectra of coupled chromophores. In particular, the importance of including the full dynamic response in the embedding potential is demonstrated.
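A schematic LaTeX summary (our notation, not the paper's) of the propagation just described: each Kohn-Sham subsystem evolves under its own effective potential plus an embedding potential rebuilt at every time step.

\begin{align}
  i\,\partial_t\,\phi_k^{I}(\mathbf{r},t)
    &= \Big[-\tfrac{1}{2}\nabla^2 + v_{\mathrm{KS}}[\rho_I](\mathbf{r},t)
       + v_{\mathrm{emb}}^{I}(\mathbf{r},t)\Big]\,\phi_k^{I}(\mathbf{r},t), \\
  v_{\mathrm{emb}}^{I}
    &= \frac{\delta T_s^{\mathrm{nad}}}{\delta\rho_I}
     + \frac{\delta E_{xc}^{\mathrm{nad}}}{\delta\rho_I}
     + \sum_{J\neq I}\big(v_{H}[\rho_J] + v_{\mathrm{ext}}^{J}\big),
\end{align}

with \rho_{\mathrm{tot}} = \sum_I \rho_I updated simultaneously for all subsystems.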
The ocean sampling day consortium.
Kopf, Anna; Bicak, Mesude; Kottmann, Renzo; Schnetzer, Julia; Kostadinov, Ivaylo; Lehmann, Katja; Fernandez-Guerra, Antonio; Jeanthon, Christian; Rahav, Eyal; Ullrich, Matthias; Wichels, Antje; Gerdts, Gunnar; Polymenakou, Paraskevi; Kotoulas, Giorgos; Siam, Rania; Abdallah, Rehab Z; Sonnenschein, Eva C; Cariou, Thierry; O'Gara, Fergal; Jackson, Stephen; Orlic, Sandi; Steinke, Michael; Busch, Julia; Duarte, Bernardo; Caçador, Isabel; Canning-Clode, João; Bobrova, Oleksandra; Marteinsson, Viggo; Reynisson, Eyjolfur; Loureiro, Clara Magalhães; Luna, Gian Marco; Quero, Grazia Marina; Löscher, Carolin R; Kremp, Anke; DeLorenzo, Marie E; Øvreås, Lise; Tolman, Jennifer; LaRoche, Julie; Penna, Antonella; Frischer, Marc; Davis, Timothy; Katherine, Barker; Meyer, Christopher P; Ramos, Sandra; Magalhães, Catarina; Jude-Lemeilleur, Florence; Aguirre-Macedo, Ma Leopoldina; Wang, Shiao; Poulton, Nicole; Jones, Scott; Collin, Rachel; Fuhrman, Jed A; Conan, Pascal; Alonso, Cecilia; Stambler, Noga; Goodwin, Kelly; Yakimov, Michael M; Baltar, Federico; Bodrossy, Levente; Van De Kamp, Jodie; Frampton, Dion Mf; Ostrowski, Martin; Van Ruth, Paul; Malthouse, Paul; Claus, Simon; Deneudt, Klaas; Mortelmans, Jonas; Pitois, Sophie; Wallom, David; Salter, Ian; Costa, Rodrigo; Schroeder, Declan C; Kandil, Mahrous M; Amaral, Valentina; Biancalana, Florencia; Santana, Rafael; Pedrotti, Maria Luiza; Yoshida, Takashi; Ogata, Hiroyuki; Ingleton, Tim; Munnik, Kate; Rodriguez-Ezpeleta, Naiara; Berteaux-Lecellier, Veronique; Wecker, Patricia; Cancio, Ibon; Vaulot, Daniel; Bienhold, Christina; Ghazal, Hassan; Chaouni, Bouchra; Essayeh, Soumya; Ettamimi, Sara; Zaid, El Houcine; Boukhatem, Noureddine; Bouali, Abderrahim; Chahboune, Rajaa; Barrijal, Said; Timinouni, Mohammed; El Otmani, Fatima; Bennani, Mohamed; Mea, Marianna; Todorova, Nadezhda; Karamfilov, Ventzislav; Ten Hoopen, Petra; Cochrane, Guy; L'Haridon, Stephane; Bizsel, Kemal Can; Vezzi, Alessandro; Lauro, Federico M; Martin, Patrick; Jensen, Rachelle M; Hinks, Jamie; Gebbels, Susan; Rosselli, Riccardo; De Pascale, Fabio; Schiavon, Riccardo; Dos Santos, Antonina; Villar, Emilie; Pesant, Stéphane; Cataletto, Bruno; Malfatti, Francesca; Edirisinghe, Ranjith; Silveira, Jorge A Herrera; Barbier, Michele; Turk, Valentina; Tinta, Tinkara; Fuller, Wayne J; Salihoglu, Ilkay; Serakinci, Nedime; Ergoren, Mahmut Cerkez; Bresnan, Eileen; Iriberri, Juan; Nyhus, Paul Anders Fronth; Bente, Edvardsen; Karlsen, Hans Erik; Golyshin, Peter N; Gasol, Josep M; Moncheva, Snejana; Dzhembekova, Nina; Johnson, Zackary; Sinigalliano, Christopher David; Gidley, Maribeth Louise; Zingone, Adriana; Danovaro, Roberto; Tsiamis, George; Clark, Melody S; Costa, Ana Cristina; El Bour, Monia; Martins, Ana M; Collins, R Eric; Ducluzeau, Anne-Lise; Martinez, Jonathan; Costello, Mark J; Amaral-Zettler, Linda A; Gilbert, Jack A; Davies, Neil; Field, Dawn; Glöckner, Frank Oliver
2015-01-01
Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world's oceans. It is a simultaneous global mega-sequencing campaign aiming to generate the largest standardized microbial data set in a single day. This will be achievable only through the coordinated efforts of an Ocean Sampling Day Consortium, supportive partnerships and networks between sites. This commentary outlines the establishment, function and aims of the Consortium and describes our vision for a sustainable study of marine microbial communities and their embedded functional traits.
An architecture for real-time vision processing
NASA Technical Reports Server (NTRS)
Chien, Chiun-Hong
1994-01-01
To study the feasibility of developing an architecture for real time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible by all processors. Each idle processor subsequently fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
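A minimal Python sketch of the coordination pattern described: idle workers pull from a shared task queue, recursively pushing subtasks, and post results to shared memory. Threads stand in for the array processors, and the task granularity is illustrative.

import queue
import threading

tasks = queue.Queue()
results = {}                         # stands in for shared result memory
results_lock = threading.Lock()

def worker():
    while True:
        item = tasks.get()
        if item is None:             # sentinel: shut down
            tasks.task_done()
            return
        name, rows = item
        if len(rows) > 64:           # too big: split into subtasks
            mid = len(rows) // 2
            tasks.put((name + "_a", rows[:mid]))
            tasks.put((name + "_b", rows[mid:]))
        else:
            with results_lock:
                results[name] = sum(rows)   # stand-in for a vision operation
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for th in threads:
    th.start()
tasks.put(("op", list(range(1000))))
tasks.join()                         # all tasks and subtasks finished
for _ in threads:
    tasks.put(None)
for th in threads:
    th.join()
total = sum(results.values())        # later serial consumption of results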
A Machine Vision Quality Control System for Industrial Acrylic Fibre Production
NASA Astrophysics Data System (ADS)
Heleno, Paulo; Davies, Roger; Correia, Bento A. Brázio; Dinis, João
2002-12-01
This paper describes the implementation of INFIBRA, a machine vision system used in the quality control of acrylic fibre production. The system was developed by INETI under a contract with a leading industrial manufacturer of acrylic fibres. It monitors several parameters of the acrylic production process. This paper presents, after a brief overview of the system, a detailed description of the machine vision algorithms developed to perform the inspection tasks unique to this system. Some of the results of online operation are also presented.
CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System
Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman
1991-01-01
Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...
NASA Technical Reports Server (NTRS)
Prinzel, L.J.; Kramer, L.J.
2009-01-01
A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.
Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)
NASA Astrophysics Data System (ADS)
Ashcraft, Todd W.; Atac, Robert
2012-06-01
Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.
COMPARISON OF RECENTLY USED PHACOEMULSIFICATION SYSTEMS USING A HEALTH TECHNOLOGY ASSESSMENT METHOD.
Huang, Jiannan; Wang, Qi; Zhao, Caimin; Ying, Xiaohua; Zou, Haidong
2017-01-01
To compare the recently used phacoemulsification systems using a health technology assessment (HTA) model. A self-administered questionnaire, which included questions to gauge opinions of the recently used phacoemulsification systems, was distributed to the chief cataract surgeons in the departments of ophthalmology of eighteen tertiary hospitals in Shanghai, China. A series of senile cataract patients undergoing phacoemulsification surgery were enrolled in the study. The surgical results and the average costs related to their surgeries were all recorded and compared for the recently used phacoemulsification systems. The four phacoemulsification systems currently used in Shanghai are the Infiniti Vision, Centurion Vision, WhiteStar Signature, and Stellaris Vision Enhancement systems. All of the doctors confirmed that the systems they used would help cataract patients recover vision. A total of 150 cataract patients who underwent phacoemulsification surgery were enrolled in the present study. A significant difference was found among the four groups in cumulative dissipated energy, with the lowest value found in the Centurion group. No serious complications were observed and a positive trend in visual acuity was found in all four groups after cataract surgery. The highest total cost of surgery was associated with procedures conducted using the Centurion Vision system, and significant differences between systems were mainly due to the cost of the consumables used in the different surgeries. This HTA comparison of four recently used phacoemulsification systems found that each system offers a satisfactory vision recovery outcome, but they differ in surgical efficacy and costs.
Embedded arrays of vertically aligned carbon nanotube carpets and methods for making them
Kim, Myung Jong; Nicholas, Nolan Walker; Kittrell, W. Carter; Schmidt, Howard K.
2015-06-30
According to some embodiments, the present invention provides a system and method for supporting a carbon nanotube array that involve an entangled carbon nanotube mat integral with the array, where the mat is embedded in an embedding material. The embedding material may be depositable on a carbon nanotube. A depositable material may be metallic or nonmetallic. The embedding material may be an adhesive material. The adhesive material may optionally be mixed with a metal powder. The embedding material may be supported by a substrate or self-supportive. The embedding material may be conductive or nonconductive. The system and method provide superior mechanical and, when applicable, electrical, contact between the carbon nanotubes in the array and the embedding material. The optional use of a conductive material for the embedding material provides a mechanism useful for integration of carbon nanotube arrays into electronic devices.
Task-focused modeling in automated agriculture
NASA Astrophysics Data System (ADS)
Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack
1993-01-01
Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.
Smart Prosthetic Hand Technology - Phase 2
2011-05-01
identification and estimation, hand motion estimation, intelligent embedded systems and control, robotic hand and biocompatibility and signaling. The... Smart Prosthetics, Bio-Robotics, Intelligent EMG Signal Processing, Embedded Systems and Intelligent Control, Inflammatory Responses of Cells, Toxicity... The developed identification algorithm using a new
Health system vision of iran in 2025.
Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid
2013-01-01
Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision over a long-term horizon is a primary stage. After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. The vision statement in the evolutionary plan of the health system is considered to be: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region, with regard to health in all policies, accountability and innovation". An explanatory context was also compiled to create a complete image of the vision. Social values, leaders' strategic goals, and main orientations are generally mentioned in a vision statement. In this statement, prosperity and justice are considered major values and ideals in the society of Iran; development and excellence in the region are the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation are the main orientations of the health system.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-18
Meeting notice: comment resolution of draft document DO-XXX, Minimum Aviation Performance Standards (MASPS) for an Enhanced Flight Vision System; discussion (9:00 a.m.-5:00 p.m.) and review/approval of the FRAC draft for PMC consideration.
Robust Spatial Autoregressive Modeling for Hardwood Log Inspection
Dongping Zhu; A.A. Beex
1994-01-01
We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...
Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer
2005-01-01
Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.
Industrial Inspection with Open Eyes: Advance with Machine Vision Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Niel, Kurt
Machine vision systems have evolved significantly with technology advances to tackle the challenges from the modern manufacturing industry. A wide range of industrial inspection applications for quality control benefit from visual information captured by different types of cameras variously configured in a machine vision system. This chapter surveys the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection. The combination of multiple technologies makes it possible for inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.
Reimer, Bryan; Mehler, Bruce; Reagan, Ian; Kidd, David; Dobres, Jonathan
2016-12-01
There is limited research on trade-offs in demand between manual and voice interfaces of embedded and portable technologies. Mehler et al. identified differences in driving performance, visual engagement and workload between two contrasting embedded vehicle system designs (Chevrolet MyLink and Volvo Sensus). The current study extends this work by comparing these embedded systems with a smartphone (Samsung Galaxy S4). None of the voice interfaces eliminated visual demand. Relative to placing calls manually, both embedded voice interfaces resulted in less eyes-off-road time than the smartphone. Errors were most frequent when calling contacts using the smartphone. The smartphone and MyLink allowed addresses to be entered using compound voice commands resulting in shorter eyes-off-road time compared with the menu-based Sensus but with many more errors. Driving performance and physiological measures indicated increased demand when performing secondary tasks relative to 'just driving', but were not significantly different between the smartphone and embedded systems. Practitioner Summary: The findings show that embedded system and portable device voice interfaces place fewer visual demands on the driver than manual interfaces, but they also underscore how differences in system designs can significantly affect not only the demands placed on drivers, but also the successful completion of tasks.
NASA Technical Reports Server (NTRS)
Halem, Milton
1999-01-01
In a recent address at the California Science Center in Los Angeles, Vice President Al Gore articulated a Digital Earth Vision. That vision spoke to developing a multi-resolution, three-dimensional visual representation of the planet into which we can roam and zoom into vast quantities of embedded geo-referenced data. The vision was not limited to moving through space, but also allowed travel over a time-line, which can be set for days, years, centuries, or even geological epochs. A working group of Federal Agencies, developing a coordinated program to implement the Vice President's vision, defined the Digital Earth as a visual representation of our planet that enables a person to explore and interact with the vast amounts of natural and cultural geo-referenced information gathered about the Earth. One of the challenges identified by the agencies was whether technology existed or would become available to permanently store and deliver all the digital data that enterprises might want to save for decades and centuries. Satellite digital data are growing in line with Moore's Law, as are computer-generated data. Similarly, the density of digital storage media in our information-intensive society is increasing by a factor of four every three years. The technological bottleneck is that the bandwidth for transferring data is growing by only a factor of four every nine years, which implies that the migration of data to viable long-term storage is growing more slowly. Older data stored on increasingly obsolete media are therefore at considerable risk if they cannot be continuously migrated to media with longer lifetimes. Another problem occurs when the software and hardware systems for which the media were designed are no longer serviced by their manufacturers; many instances exist where support for these systems is phased out after mergers or business closures. In addition, the survivability of older media can suffer from physical breakdown of components (e.g., tapes simply lose their magnetic properties after a long time in storage). As a result, a potential data survivability crisis is emerging, on a scale comparable to that facing the Social Security System. Sometime in the next one or two decades, the exponential growth of data will become so great that many enterprises will not be able to migrate their data to more permanent media during the lifetime of the media on which the data reside, resulting in significant losses of data and attendant impacts. To avoid this crisis, we need to plan for, and devote greater financial and intellectual resources to, the development and refinement of new storage media and migration technologies in order to preserve all data any organization determines worth saving permanently. This talk will explore technological solutions and offer recommendations to address this data crisis.
Design And Implementation Of Integrated Vision-Based Robotic Workcells
NASA Astrophysics Data System (ADS)
Chen, Michael J.
1985-01-01
Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to exemplify the underlying design philosophy and principles in the fabrication of vision-based robotic systems.
Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong
2015-04-14
Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.
Scheduling of network access for feedback-based embedded systems
NASA Astrophysics Data System (ADS)
Liberatore, Vincenzo
2002-07-01
nd communication capabilities. Examples range from smart dust embedded in building materials to networks of appliances in the home. Embedded devices will be deployed in unprecedented numbers, will enable pervasive distributed computing, and will radically change the way people interact with the surrounding environment [EGH00a]. The paper targets embedded systems and their real-time (RT) communication requirements. RT requirements arise from the
Tracking by Identification Using Computer Vision and Radio
Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez
2013-01-01
We present a novel system for the detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds: excellent computer-vision-based localization and strong identity information provided by the radio system. It is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for the evaluation of systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion of both systems significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485
Design of a surgical robot with dynamic vision field control for Single Port Endoscopic Surgery.
Kobayashi, Yo; Sekiguchi, Yuta; Tomono, Yu; Watanabe, Hiroki; Toyoda, Kazutaka; Konishi, Kozo; Tomikawa, Morimasa; Ieiri, Satoshi; Tanoue, Kazuo; Hashizume, Makoto; Fujie, Masaktsu G
2010-01-01
Recently, a robotic system was developed to assist Single Port Endoscopic Surgery (SPS). However, the existing system required a manual change of vision field, hindering the surgical task and increasing the degrees of freedom (DOFs) of the manipulator. We proposed a surgical robot for SPS with dynamic vision field control, the endoscope view being manipulated by a master controller. The prototype robot consisted of a positioning and sheath manipulator (6 DOF) for vision field control, and dual tool tissue manipulators (gripping: 5DOF, cautery: 3DOF). Feasibility of the robot was demonstrated in vitro. The "cut and vision field control" (using tool manipulators) is suitable for precise cutting tasks in risky areas while a "cut by vision field control" (using a vision field control manipulator) is effective for rapid macro cutting of tissues. A resection task was accomplished using a combination of both methods.
Perceptual organization in computer vision - A review and a proposal for a classificatory structure
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1993-01-01
The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.
Huang, Chen; Muñoz-García, Ana Belén; Pavone, Michele
2016-12-28
Density-functional embedding theory provides a general way to perform multi-physics quantum mechanics simulations of large-scale materials by dividing the total system's electron density into a cluster's density and its environment's density. It is then possible to compute accurate local electronic structures and energetics of the embedded cluster with high-level methods, while retaining a low-level description of the environment. The prerequisite step in density-functional embedding theory is the cluster definition. In covalent systems, cutting across the covalent bonds that connect the cluster and its environment leads to dangling bonds (unpaired electrons). These represent a major obstacle for the application of density-functional embedding theory to extended covalent systems. In this work, we developed a simple scheme to define the cluster in covalent systems. Instead of cutting covalent bonds, we directly split the boundary atoms to maintain the valency of the cluster. With this new covalent embedding scheme, we compute the dehydrogenation energies of several different molecules, as well as the binding energy of a cobalt atom on graphene. Well-localized cluster densities are observed, which can facilitate the use of localized basis sets in high-level calculations. The results are found to converge faster with the embedding method than with the other multi-physics approach, ONIOM. This work paves the way to performing density-functional embedding simulations of heterogeneous systems in which different types of chemical bonds are present.
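A schematic LaTeX statement (our notation) of the partition underlying the scheme: the total density splits into cluster and environment parts, with each split boundary atom apportioned so both fragments keep closed valencies.

\begin{equation}
  \rho_{\mathrm{tot}}(\mathbf{r}) \;=\; \rho_{\mathrm{cluster}}(\mathbf{r}) \;+\; \rho_{\mathrm{env}}(\mathbf{r})
\end{equation}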
FLORA™: Phase I development of a functional vision assessment for prosthetic vision users.
Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy
2015-07-01
Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population to evaluate the impact of new vision-restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional visual ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional visual tasks for observation of performance and a case narrative summary. Results were analysed to determine whether the interview questions and functional visual tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, 26 subjects were assessed with the FLORA. Seven different evaluators administered the assessment. All 14 interview questions were asked. All 35 tasks for functional vision were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options—impossible (33 per cent), difficult (23 per cent), moderate (24 per cent) and easy (19 per cent)—were used by the evaluators. Evaluators also judged the amount of vision they observed the subjects using to complete the various tasks, with 'vision only' occurring 75 per cent on average with the System ON, and 29 per cent with the System OFF. The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as an assessment tool for functional vision and well-being.
NASA Astrophysics Data System (ADS)
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems, which serve as key components of critical infrastructure in our social life, are still ever-expanding. In particular, embedded OSS systems such as Android, BusyBox, and TRON have been gaining a lot of attention in the embedded system area. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models using goodness-of-fit criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
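A minimal Python sketch of the kind of hazard-rate bookkeeping referred to above, using the classic exponential SRGM (Goel-Okumoto NHPP) as a stand-in for the paper's flexible hazard-rate model; parameters and times are illustrative.

import math

a, b = 100.0, 0.05            # expected total faults, detection rate (assumed)

def mean_value(t):
    return a * (1.0 - math.exp(-b * t))      # expected cumulative failures by t

def intensity(t):
    return a * b * math.exp(-b * t)          # failure intensity (NHPP hazard)

def reliability(x, t):
    # probability of no failure in (t, t + x]
    return math.exp(-(mean_value(t + x) - mean_value(t)))

print(mean_value(40.0), intensity(40.0), reliability(5.0, 40.0))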
The role of vision processing in prosthetic vision.
Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette
2012-01-01
Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.
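The reduced resolution and dynamic range described here can be mimicked with a simple simulation; the grid size and number of brightness levels below are arbitrary assumptions, not parameters from the paper:

```python
import numpy as np

def simulate_prosthetic_view(image, grid=(32, 32), levels=4):
    """Reduce an image to a coarse grid of 'phosphenes' with few brightness levels.

    image: 2-D float array in [0, 1]; grid: phosphene resolution;
    levels: dynamic-range steps assumed available to the implant.
    """
    h, w = image.shape
    gh, gw = grid
    # Block-average the image down to the phosphene grid.
    blocks = image[: h - h % gh, : w - w % gw]
    blocks = blocks.reshape(gh, blocks.shape[0] // gh, gw, blocks.shape[1] // gw)
    coarse = blocks.mean(axis=(1, 3))
    # Quantize to the available dynamic range.
    return np.round(coarse * (levels - 1)) / (levels - 1)

frame = np.random.rand(240, 320)              # stand-in for a camera frame
print(simulate_prosthetic_view(frame).shape)  # (32, 32)
```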
A Project-Based Laboratory for Learning Embedded System Design with Industry Support
ERIC Educational Resources Information Center
Lee, Chyi-Shyong; Su, Juing-Huei; Lin, Kuo-En; Chang, Jia-Hao; Lin, Gu-Hong
2010-01-01
A project-based laboratory for learning embedded system design with support from industry is presented in this paper. The aim of this laboratory is to motivate students to learn the building blocks of embedded systems and practical control algorithms by constructing a line-following robot using the quadratic interpolation technique to predict the…
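The abstract is truncated, but the quadratic interpolation it mentions is commonly used to localize a line between discrete sensors; a generic sketch (all names and values hypothetical) fits a parabola through the strongest reading and its neighbours:

```python
import numpy as np

def line_position(readings):
    """Estimate sub-sensor line position from reflectance readings.

    Fit a parabola through the strongest sensor and its two neighbours;
    the vertex gives a sub-sensor-resolution estimate of the line centre.
    """
    i = int(np.argmax(readings))
    i = min(max(i, 1), len(readings) - 2)        # keep neighbours in range
    y0, y1, y2 = readings[i - 1], readings[i], readings[i + 1]
    denom = y0 - 2 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return i + offset                            # position in sensor units

print(line_position([0.1, 0.3, 0.9, 0.7, 0.2])) # ~2.25
```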
ERIC Educational Resources Information Center
Mattmann, C. A.; Medvidovic, N.; Malek, S.; Edwards, G.; Banerjee, S.
2012-01-01
As embedded software systems have grown in number, complexity, and importance in the modern world, a corresponding need to teach computer science students how to effectively engineer such systems has arisen. Embedded software systems, such as those that control cell phones, aircraft, and medical equipment, are subject to requirements and…
ERIC Educational Resources Information Center
Jing, Lei; Cheng, Zixue; Wang, Junbo; Zhou, Yinghui
2011-01-01
Embedded system technologies are undergoing dramatic change. Competent embedded system engineers are becoming a scarce resource in the industry. Given this, universities should revise their specialist education to meet industry demands. In this paper, a spirally tight-coupled step-by-step educational method, based on an analysis of industry…
Integrating Embedded Computing Systems into High School and Early Undergraduate Education
ERIC Educational Resources Information Center
Benson, B.; Arfaee, A.; Choon Kim; Kastner, R.; Gupta, R. K.
2011-01-01
Early exposure to embedded computing systems is crucial for students to be prepared for the embedded computing demands of today's world. However, exposure to systems knowledge often comes too late in the curriculum to stimulate students' interests and to provide a meaningful difference in how they direct their choice of electives for future…
Hierarchical Modelling Of Mobile, Seeing Robots
NASA Astrophysics Data System (ADS)
Luh, Cheng-Jye; Zeigler, Bernard P.
1990-03-01
This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.
Hierarchical modelling of mobile, seeing robots
NASA Technical Reports Server (NTRS)
Luh, Cheng-Jye; Zeigler, Bernard P.
1990-01-01
This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.
Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems
NASA Technical Reports Server (NTRS)
Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.
1992-01-01
This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.
Marking parts to aid robot vision
NASA Technical Reports Server (NTRS)
Bales, J. W.; Barker, L. K.
1981-01-01
The premarking of parts for subsequent identification by a robot vision system appears to be beneficial as an aid in the automation of certain tasks such as construction in space. A simple, color coded marking system is presented which allows a computer vision system to locate an object, calculate its orientation, and determine its identity. Such a system has the potential to operate accurately, and because the computer shape analysis problem has been simplified, it has the ability to operate in real time.
Viewpoints, Formalisms, Languages, and Tools for Cyber-Physical Systems
2014-05-16
Organization]: Special-Purpose and Application-Based Systems (real-time and embedded systems); F.1.2 [Computation by Abstract Devices]: Models of... domain CPS is not new. For example, early automotive embedded systems in the 1970s already combined closed-loop control of the brake and engine subsystems... Consider for example the development of an embedded control system such as an advanced driver assistance system (ADAS) (e.g., adaptive cruise control
ROBOSIGHT: Robotic Vision System For Inspection And Manipulation
NASA Astrophysics Data System (ADS)
Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh
1989-02-01
Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.
Martínez-Bueso, Pau; Moyà-Alcover, Biel
2014-01-01
Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user's image on the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) and two different groups of users (8 with disabilities and 32 without disabilities), using usability measures (time-to-start, Ts, and time-to-complete, Tc). A two-tailed paired-samples t-test confirmed that in the case of disabilities the mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts = 7.09, P < 0.001; Tc = 4.48, P < 0.005). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. In the case of disabilities, mirror feedback mechanisms facilitated the interaction in vision-based systems for rehabilitation. These results suggest that developers and researchers should adopt this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimating and compensating the geometric transformations between images, an algorithm for detecting moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.
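A rough sketch of such a three-part chain (not the authors' algorithms) using OpenCV: ego-motion is estimated as a homography and compensated before frame differencing:

```python
import cv2
import numpy as np

def detect_moving_objects(prev_gray, gray):
    """Detect movers from a moving camera: compensate ego-motion, then difference.

    ORB feature matching estimates a homography between frames (the
    geometric-transform compensation step), the previous frame is warped onto
    the current one, and residual differences are thresholded.
    """
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    warped = cv2.warpPerspective(prev_gray, H, gray.shape[::-1])
    diff = cv2.absdiff(gray, warped)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
```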
NASA Technical Reports Server (NTRS)
Campbell, R. H.; Essick, Ray B.; Johnston, Gary; Kenny, Kevin; Russo, Vince
1987-01-01
Project EOS is studying the problems of building adaptable real-time embedded operating systems for the scientific missions of NASA. Choices (A Class Hierarchical Open Interface for Custom Embedded Systems) is an operating system designed and built by Project EOS to address the following specific issues: the software architecture for adaptable embedded parallel operating systems, the achievement of high-performance and real-time operation, the simplification of interprocess communications, the isolation of operating system mechanisms from one another, and the separation of mechanisms from policy decisions. Choices is written in C++ and runs on a ten processor Encore Multimax. The system is intended for use in constructing specialized computer applications and research on advanced operating system features including fault tolerance and parallelism.
An overview of computer vision
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1982-01-01
An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.
On-board computational efficiency in real time UAV embedded terrain reconstruction
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Agadakos, Ioannis; Athanasiou, Vasilis; Papaefstathiou, Ioannis; Mertikas, Stylianos; Kyritsis, Sarantis; Tripolitsiotis, Achilles; Zervos, Panagiotis
2014-05-01
In the last few years, there has been a surge of applications for object recognition, interpretation and mapping using unmanned aerial vehicles (UAV). Specifications for constructing these UAVs are highly diverse, with contradictory characteristics including cost-efficiency, carrying weight, flight time, mapping precision, and real-time processing capabilities. In this work, a hexacopter UAV is employed for near real-time terrain mapping. The main challenge addressed is to retain a low-cost flying platform with real-time processing capabilities. The UAV weight limitation, which affects the overall flight time, makes the selection of the on-board processing components particularly critical. On the other hand, surface reconstruction, as a computationally demanding task, calls for a highly capable processing unit on board. To merge these two contradicting aspects along with customized development, a System on a Chip (SoC) integrated circuit is proposed as a low-power, low-cost processor, which natively supports camera sensors and positioning and navigation systems. Modern SoCs, such as the OMAP3530 or Zynq, are classified as heterogeneous devices and provide a versatile platform, allowing access to both general-purpose processors, such as the ARM11, and specialized processors, such as a digital signal processor and a field-programmable gate array. A UAV equipped with the proposed embedded processors allows on-board terrain reconstruction using stereo vision in near real time. Furthermore, depending on the frame rate required, additional image processing may take place concurrently, such as image rectification and object detection. Lastly, the onboard positioning and navigation (e.g., GNSS) chip may further improve the quality of the generated map. The resulting terrain maps are compared to ground-truth geodetic measurements in order to assess the accuracy limitations of the overall process. It is shown that the proposed system offers considerable potential for on-board computational efficiency under tight time constraints.
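As a hedged illustration of the on-board stereo step (OpenCV block matching as a stand-in for whatever the authors run on the SoC; file names hypothetical):

```python
import cv2

# left.png / right.png: rectified 8-bit grayscale frames from the UAV stereo rig.
left_img = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right_img = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: cheap enough to be plausible on a low-power SoC.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left_img, right_img)  # fixed-point disparity map

# With calibration (focal length f, baseline B), depth = f * B / disparity;
# the resulting depth map is the raw material for terrain reconstruction.
```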
The Efficacy of Optometric Vision Therapy.
ERIC Educational Resources Information Center
Journal of the American Optometric Association, 1988
1988-01-01
This review aims to document the efficacy and validity of vision therapy for modifying and improving vision functioning. The paper describes the essential components of the visual system and disorders which can be physiologically and clinically identified. Vision therapy is defined as a clinical approach for correcting and ameliorating the effects…
Jun Liu; Fan Zhang; Huang, He Helen
2014-01-01
Pattern recognition (PR) based on electromyographic (EMG) signals has been developed for multifunctional artificial arms for decades. However, assessment of EMG PR control for daily prosthesis use is still limited. One of the major barriers is the lack of a portable and configurable embedded system to implement EMG PR control. This paper aimed to design an open and configurable embedded system for EMG PR implementation so that researchers can easily modify and optimize the control algorithms on our platform and test EMG PR control outside of lab environments. The open platform was built on an open source embedded Linux operating system running on a high-performance Gumstix board. Both the hardware and the software system framework were openly designed. The system was highly flexible in terms of the number of inputs/outputs and the calibration interfaces used. Such flexibility enabled easy integration of our embedded system with different types of commercialized or prototypic artificial arms. The resulting system is portable enough for take-home use. Additionally, compared with previously reported embedded systems for EMG PR implementation, our system demonstrated improved processing efficiency and high system precision. Our long-term goals are (1) to develop a wearable and practical EMG PR-based control for multifunctional artificial arms, and (2) to quantify the benefits of EMG PR-based control over conventional myoelectric prosthesis control in a home setting.
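A minimal sketch of the kind of EMG PR pipeline such a platform hosts, using classic time-domain features and an LDA classifier; the channel counts, window sizes, and data below are invented, and the paper's actual algorithms may differ:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(window, eps=0.01):
    """Classic time-domain EMG features for one channel of one analysis window:
    mean absolute value, waveform length, and zero crossings."""
    mav = np.mean(np.abs(window))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.sum((window[:-1] * window[1:] < 0) &
                (np.abs(np.diff(window)) > eps))
    return [mav, wl, zc]

def extract(windows):
    # windows: (n_windows, n_channels, n_samples) raw EMG
    return np.array([[f for ch in w for f in td_features(ch)] for w in windows])

# Hypothetical training data: 200 windows, 4 channels, 250 samples each.
X = extract(np.random.randn(200, 4, 250))
y = np.random.randint(0, 5, 200)            # 5 intended arm motions
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(extract(np.random.randn(1, 4, 250))))
```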
Dale, Naomi; Sakkalou, Elena; O'Reilly, Michelle; Springall, Clare; De Haan, Michelle; Salt, Alison
2017-07-01
To investigate how vision relates to early development by studying vision and cognition in a national cohort of 1-year-old infants with congenital disorders of the peripheral visual system and visual impairment. This was a cross-sectional observational investigation of a nationally recruited cohort of infants with 'simple' and 'complex' congenital disorders of the peripheral visual system. Entry age was 8 to 16 months. Vision level (Near Detection Scale) and non-verbal cognition (sensorimotor understanding, Reynell Zinkin Scales) were assessed. Parents completed demographic questionnaires. Of 90 infants (49 males, 41 females; mean age 13mo, standard deviation [SD] 2.5mo; range 7-17mo), 25 (28%) had profound visual impairment (light perception at best) and 65 (72%) had severe visual impairment (basic 'form' vision). The Near Detection Scale correlated significantly with sensorimotor understanding developmental quotients in the 'total', 'simple', and 'complex' groups (all p<0.001). Age and vision accounted for 48% of sensorimotor understanding variance. Infants with profound visual impairment, especially in the 'complex' group with congenital disorders of the peripheral visual system with known brain involvement, showed the greatest cognitive delay. Lack of vision is associated with delayed early object-manipulative abilities and concepts; 'form' vision appeared to support early developmental advance. This paper provides baseline characteristics for cross-sectional and longitudinal follow-up investigations in progress. A methodological strength of the study was the representativeness of the cohort according to national epidemiological and population census data. © 2017 Mac Keith Press.
Deverell, Lil; Meyer, Denny; Lau, Bee Theng; Al Mahmud, Abdullah; Sukunesan, Suku; Bhowmik, Jahar; Chai, Almon; McCarthy, Chris; Zheng, Pan; Pipingas, Andrew; Islam, Fakir M Amirul
2017-01-01
Introduction: Orientation and mobility (O&M) specialists assess the functional vision and O&M skills of people with mobility problems, usually relating to low vision or blindness. There are numerous O&M assessment checklists but no measures that reduce qualitative assessment data to a single comparable score suitable for assessing any O&M client, of any age or ability, in any location. Functional measures are needed internationally to align O&M assessment practices, guide referrals, profile O&M clients, plan appropriate services and evaluate outcomes from O&M programmes (eg, long cane training), assistive technology (eg, hazard sensors) and medical interventions (eg, retinal implants). This study aims to validate two new measures of functional performance vision-related outcomes in orientation and mobility (VROOM) and orientation and mobility outcomes (OMO) in the context of ordinary O&M assessments in Australia, with cultural comparisons in Malaysia, also developing phone apps and online training to streamline professional assessment practices. Methods and analysis: This multiphase observational study will employ embedded mixed methods with a qualitative/quantitative priority: co-rating functional vision and O&M during social inquiry. Australian O&M agencies (n=15) provide the sampling frame. O&M specialists will use quota sampling to generate cross-sectional assessment data (n=400) before investigating selected cohorts in outcome studies. Cultural relevance of the VROOM and OMO tools will be investigated in Malaysia, where the tools will inform the design of assistive devices and evaluate prototypes. Exploratory and confirmatory factor analysis, Rasch modelling, cluster analysis and analysis of variance will be undertaken along with descriptive analysis of measurement data. Qualitative findings will be used to interpret VROOM and OMO scores, filter statistically significant results, warrant their generalisability and identify additional relevant constructs that could also be measured. Ethics and dissemination: Ethical approval has been granted by the Human Research Ethics Committee at Swinburne University (SHR Project 2016/316). Dissemination of results will be via agency reports, journal articles and conference presentations. PMID:29273657
Deverell, Lil; Meyer, Denny; Lau, Bee Theng; Al Mahmud, Abdullah; Sukunesan, Suku; Bhowmik, Jahar; Chai, Almon; McCarthy, Chris; Zheng, Pan; Pipingas, Andrew; Islam, Fakir M Amirul
2017-12-21
Orientation and mobility (O&M) specialists assess the functional vision and O&M skills of people with mobility problems, usually relating to low vision or blindness. There are numerous O&M assessment checklists but no measures that reduce qualitative assessment data to a single comparable score suitable for assessing any O&M client, of any age or ability, in any location. Functional measures are needed internationally to align O&M assessment practices, guide referrals, profile O&M clients, plan appropriate services and evaluate outcomes from O&M programmes (eg, long cane training), assistive technology (eg, hazard sensors) and medical interventions (eg, retinal implants). This study aims to validate two new measures of functional performance vision-related outcomes in orientation and mobility (VROOM) and orientation and mobility outcomes (OMO) in the context of ordinary O&M assessments in Australia, with cultural comparisons in Malaysia, also developing phone apps and online training to streamline professional assessment practices. This multiphase observational study will employ embedded mixed methods with a qualitative/quantitative priority: co-rating functional vision and O&M during social inquiry. Australian O&M agencies (n=15) provide the sampling frame. O&M specialists will use quota sampling to generate cross-sectional assessment data (n=400) before investigating selected cohorts in outcome studies. Cultural relevance of the VROOM and OMO tools will be investigated in Malaysia, where the tools will inform the design of assistive devices and evaluate prototypes. Exploratory and confirmatory factor analysis, Rasch modelling, cluster analysis and analysis of variance will be undertaken along with descriptive analysis of measurement data. Qualitative findings will be used to interpret VROOM and OMO scores, filter statistically significant results, warrant their generalisability and identify additional relevant constructs that could also be measured. Ethical approval has been granted by the Human Research Ethics Committee at Swinburne University (SHR Project 2016/316). Dissemination of results will be via agency reports, journal articles and conference presentations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Low computation vision-based navigation for a Martian rover
NASA Technical Reports Server (NTRS)
Gavin, Andrew S.; Brooks, Rodney A.
1994-01-01
Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.
An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)
2010-03-01
An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process... Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D
Software model of a machine vision system based on the common house fly.
Madsen, Robert; Barrett, Steven; Wilcox, Michael
2005-01-01
The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidium of the fly's eye and contain seven photoreceptors each with a Gaussian profile. The spacing between photoreceptors is variable providing for more or less detail as needed. The cartridges provide information on what type of features they see and neighboring cartridges share information to construct a feature map.
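A toy version of the cartridge layout (assumed parameters, not the authors' model code): blurring with a Gaussian and point-sampling is equivalent to integrating Gaussian-profile receptors, and each cartridge samples a centre plus six ring receptors:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cartridge_responses(image, spacing=8, sigma=1.5):
    """Sample an image with a grid of 'cartridges' of 7 Gaussian photoreceptors.

    Blurring the image with a Gaussian and sampling at a point is equivalent
    to integrating a Gaussian-profile receptor centred at that point.
    """
    blurred = gaussian_filter(image.astype(float), sigma)
    # One central receptor plus six neighbours in a hexagonal ring.
    angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
    ring = [(0.0, 0.0)] + [(np.cos(a), np.sin(a)) for a in angles]
    h, w = blurred.shape
    responses = {}
    for cy in range(spacing, h - spacing, spacing):
        for cx in range(spacing, w - spacing, spacing):
            r = spacing / 3.0   # receptor offset within the cartridge
            responses[(cy, cx)] = [
                blurred[int(cy + r * dy), int(cx + r * dx)] for dx, dy in ring
            ]
    return responses

resp = cartridge_responses(np.random.rand(64, 64))
print(len(resp), len(next(iter(resp.values()))))   # number of cartridges, 7
```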
A dental vision system for accurate 3D tooth modeling.
Zhang, Li; Alemzadeh, K
2006-01-01
This paper describes an active vision system based reverse engineering approach to extract the three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems to improve the accuracy of 3D teeth models and at the same time improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing and fast & accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and weighted objectives evaluation chart. Reconstruction results and accuracy evaluation are presented on digitizing different teeth models.
Tunnel vision: sharper gradient of spatial attention in autism.
Robertson, Caroline E; Kravitz, Dwight J; Freyberg, Jan; Baron-Cohen, Simon; Baker, Chris I
2013-04-17
Enhanced perception of detail has long been regarded a hallmark of autism spectrum conditions (ASC), but its origins are unknown. Normal sensitivity on all fundamental perceptual measures (visual acuity, contrast discrimination, and flicker detection) is strongly established in the literature. If individuals with ASC do not have superior low-level vision, how is perception of detail enhanced? We argue that this apparent paradox can be resolved by considering visual attention, which is known to enhance basic visual sensitivity, resulting in greater acuity and lower contrast thresholds. Here, we demonstrate that the focus of attention and concomitant enhancement of perception are sharper in human individuals with ASC than in matched controls. Using a simple visual acuity task embedded in a standard cueing paradigm, we mapped the spatial and temporal gradients of attentional enhancement by varying the distance and onset time of visual targets relative to an exogenous cue, which obligatorily captures attention. Individuals with ASC demonstrated a greater fall-off in performance with distance from the cue than controls, indicating a sharper spatial gradient of attention. Further, this sharpness was highly correlated with the severity of autistic symptoms in ASC, as well as autistic traits across both ASC and control groups. These findings establish the presence of a form of "tunnel vision" in ASC, with far-reaching implications for our understanding of the social and neurobiological aspects of autism.
Erskine, Jonathan; Hunter, David J; Small, Adrian; Hicks, Chris; McGovern, Tom; Lugsden, Ed; Whitty, Paula; Steen, Nick; Eccles, Martin Paul
2013-02-01
The research project 'An Evaluation of Transformational Change in NHS North East' examines the progress and success of National Health Service (NHS) organisations in north east England in implementing and embedding the North East Transformation System (NETS), a region-wide programme to improve healthcare quality and safety, and to reduce waste, using a combination of Vision, Compact, and Lean-based Method. This paper concentrates on findings concerning the role of leadership in enabling transformational change, based on semi-structured interviews with a mix of senior NHS managers and quality improvement staff in 14 study sites. Most interviewees felt that implementing the NETS requires committed, stable leadership, attention to team-building across disciplines and leadership development at many levels. We conclude that without senior leader commitment to continuous improvement over a long time scale and serious efforts to distribute leadership tasks to all levels, healthcare organisations are less likely to achieve positive changes in managerial-clinical relations, sustainable improvements to organisational culture and, ultimately, the region-wide step change in quality, safety and efficiency that the NETS was designed to deliver. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-01-01
Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on the battlefields where manned flight is considered too risky or difficult, but also in everyday life purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers during the navigation and control loop, which allows for smart GPS-based features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on the visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments. PMID:28867775
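The paper's marker is a custom design; as a stand-in, the sketch below detects a standard ArUco marker with OpenCV's contrib module (older cv2.aruco.detectMarkers API; the newer ArucoDetector class replaces it in OpenCV >= 4.7, and the file name is hypothetical):

```python
import cv2

# Hypothetical camera frame showing the landing pad.
frame = cv2.imread("landing_pad.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect a predefined ArUco marker (requires opencv-contrib-python).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    # Centre of the first detected marker: the tracking target during descent.
    cx, cy = corners[0][0].mean(axis=0)
    print(f"marker {ids[0][0]} at ({cx:.1f}, {cy:.1f})")
```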
A hardware-in-the-loop simulation program for ground-based radar
NASA Astrophysics Data System (ADS)
Lam, Eric P.; Black, Dennis W.; Ebisu, Jason S.; Magallon, Julianna
2011-06-01
A radar system created using an embedded computer system needs testing. The way to test an embedded computer system is different from the debugging approaches used on desktop computers. One way to test a radar system is to feed it artificial inputs and analyze the outputs of the radar. Often, not all of the building blocks of the radar system are available to test. This requires the engineer to test parts of the radar system using a "black box" approach. A common way to test software code in a desktop simulation is to use breakpoints so that it pauses after each cycle through its calculations. The outputs are compared against the values that are expected. This requires the engineer to use valid test scenarios. We present a hardware-in-the-loop simulator that allows the embedded system to think it is operating with real-world inputs and outputs. From the embedded system's point of view, it is operating in real time. The hardware-in-the-loop simulation is based on our Desktop PC Simulation (PCS) testbed. In the past, PCS was used for ground-based radars. This embedded simulation, called Embedded PCS, allows a rapid simulated evaluation of ground-based radar performance in a laboratory environment.
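Conceptually, the black-box approach amounts to injecting synthetic inputs and scoring outputs against truth; a desktop-level sketch (not the actual PCS/Embedded PCS code; all names and values hypothetical):

```python
import numpy as np

def radar_under_test(returns):
    """Stand-in for the embedded radar's black-box processing chain."""
    return int(np.argmax(returns))          # e.g., range bin of strongest echo

def run_scenario(target_bin, n_bins=128, noise=0.1, trials=100):
    """Feed synthetic returns to the black box and compare outputs to truth."""
    hits = 0
    for _ in range(trials):
        returns = noise * np.random.rand(n_bins)
        returns[target_bin] += 1.0           # inject the simulated target
        if radar_under_test(returns) == target_bin:
            hits += 1
    return hits / trials

print(run_scenario(target_bin=42))           # expected detection rate ~1.0
```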
Health System Vision of Iran in 2025
Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid
2013-01-01
Background: Vast changes in disease features and risk factors and the influence of demographic, economic, and social trends on the health system make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision over a long-term horizon is a primary stage. Method: After a narrative and purposeful review of documents, major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and receiving the ideas of policy makers and experts of the health system. Results: The vision statement in the evolutionary plan of the health system is considered to be: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region and with regard to health in all policies, accountability and innovation". An explanatory context was also compiled to create a complete image of the vision. Conclusion: Social values, leaders' strategic goals, and main orientations are generally mentioned in a vision statement. In this statement, prosperity and justice are considered major values and ideals in the society of Iran; development and excellence in the region as leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation as main orientations of the health system. PMID:23865011
Results of SEI Independent Research and Development Projects
2009-12-01
Achieving Predictable Performance in Multicore Embedded Real-Time Systems: Dionisio de Niz, Jeffrey Hansen, Gabriel Moreno, Daniel Plakosh, Jorgen Hanson... "Description Languages." Fourth Congress on Embedded Real-Time Systems (ERTS), January 2008. [Hansson 2008b] J. Hansson, P. H. Feiler, & J. Morley...
Embedded object concept: case balancing two-wheeled robot
NASA Astrophysics Data System (ADS)
Vallius, Tero; Röning, Juha
2007-09-01
This paper presents the Embedded Object Concept (EOC) and a telepresence robot system which is a test case for the EOC. The EOC utilizes common object-oriented methods used in software by applying them to combined Lego-like software-hardware entities. These entities represent objects in object-oriented design methods, and they are the building blocks of embedded systems. The goal of the EOC is to make the designing of embedded systems faster and easier. This concept enables people without comprehensive knowledge in electronics design to create new embedded systems, and for experts it shortens the design time of new embedded systems. We present the current status of a telepresence robot created with Atomi-objects, which is the name for our implementation of the embedded objects. The telepresence robot is a relatively complex test case for the EOC. The robot has been constructed using incremental device development, which is made possible by the architecture of the EOC. The robot contains video and audio exchange capability and a controlling system for driving with two wheels. The robot consists of Atomi-objects, demonstrating the suitability of the EOC for prototyping and easy modifications, and proving the capabilities of the EOC by realizing a function that normally requires a computer. The computer counterpart is a regular PC with audio and video capabilities running with a robot control application. The robot is functional and successfully tested.
Reimer, Bryan; Mehler, Bruce; Reagan, Ian; Kidd, David; Dobres, Jonathan
2016-01-01
There is limited research on trade-offs in demand between manual and voice interfaces of embedded and portable technologies. Mehler et al. identified differences in driving performance, visual engagement and workload between two contrasting embedded vehicle system designs (Chevrolet MyLink and Volvo Sensus). The current study extends this work by comparing these embedded systems with a smartphone (Samsung Galaxy S4). None of the voice interfaces eliminated visual demand. Relative to placing calls manually, both embedded voice interfaces resulted in less eyes-off-road time than the smartphone. Errors were most frequent when calling contacts using the smartphone. The smartphone and MyLink allowed addresses to be entered using compound voice commands resulting in shorter eyes-off-road time compared with the menu-based Sensus but with many more errors. Driving performance and physiological measures indicated increased demand when performing secondary tasks relative to ‘just driving’, but were not significantly different between the smartphone and embedded systems. Practitioner Summary: The findings show that embedded system and portable device voice interfaces place fewer visual demands on the driver than manual interfaces, but they also underscore how differences in system designs can significantly affect not only the demands placed on drivers, but also the successful completion of tasks. PMID:27110964
NASA Technical Reports Server (NTRS)
2005-01-01
The Transformational Concept of Operations (CONOPS) provides a long-term, sustainable vision for future U.S. space transportation infrastructure and operations. This vision presents an interagency concept, developed cooperatively by the Department of Defense (DoD), the Federal Aviation Administration (FAA), and the National Aeronautics and Space Administration (NASA) for the upgrade, integration, and improved operation of major infrastructure elements of the nation's space access systems. The interagency vision described in the Transformational CONOPS would transform today's space launch infrastructure into a shared system that supports worldwide operations for a variety of users. The system concept is sufficiently flexible and adaptable to support new types of missions for exploration, commercial enterprise, and national security, as well as to endure further into the future when space transportation technology may be sufficiently advanced to enable routine public space travel as part of the global transportation system. The vision for future space transportation operations is based on a system-of-systems architecture that integrates the major elements of the future space transportation system - transportation nodes (spaceports), flight vehicles and payloads, tracking and communications assets, and flight traffic coordination centers - into a transportation network that concurrently accommodates multiple types of mission operators, payloads, and vehicle fleets. This system concept also establishes a common framework for defining a detailed CONOPS for the major elements of the future space transportation system. The resulting set of four CONOPS (see Figure 1 below) describes the common vision for a shared future space transportation system (FSTS) infrastructure from a variety of perspectives.
NASA Technical Reports Server (NTRS)
Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.
1981-01-01
The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
The genetics of normal and defective color vision
Neitz, Jay; Neitz, Maureen
2011-01-01
The contributions of genetics research to the science of normal and defective color vision over the previous few decades are reviewed emphasizing the developments in the 25 years since the last anniversary issue of Vision Research. Understanding of the biology underlying color vision has been vaulted forward through the application of the tools of molecular genetics. For all their complexity, the biological processes responsible for color vision are more accessible than for many other neural systems. This is partly because of the wealth of genetic variations that affect color perception, both within and across species, and because components of the color vision system lend themselves to genetic manipulation. Mutations and rearrangements in the genes encoding the long, middle, and short wavelength sensitive cone pigments are responsible for color vision deficiencies and mutations have been identified that affect the number of cone types, the absorption spectrum of the pigments, the functionality and viability of the cones, and the topography of the cone mosaic. The addition of an opsin gene, as occurred in the evolution of primate color vision, and has been done in experimental animals can produce expanded color vision capacities and this has provided insight into the underlying neural circuitry. PMID:21167193
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2001-10-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain appears able to emulate similar graph/network models, which implies an important paradigm shift in our knowledge about the brain: from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the 'what' and 'where' visual pathways. Such systems can open new horizons for the robotic and computer vision industries.
Machine vision system for online inspection of freshly slaughtered chickens
USDA-ARS's Scientific Manuscript database
A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...
Embedding methods for the steady Euler equations
NASA Technical Reports Server (NTRS)
Chang, S. H.; Johnson, G. M.
1983-01-01
An approach to the numerical solution of the steady Euler equations is to embed the first-order Euler system in a second-order system and then to recapture the original solution by imposing additional boundary conditions. Initial development of this approach and computational experimentation with it were previously based on heuristic physical reasoning. This has led to the construction of a relaxation procedure for the solution of two-dimensional steady flow problems. The theoretical justification for the embedding approach is addressed. It is proven that, with the appropriate choice of embedding operator and additional boundary conditions, the solution to the embedded system is exactly the one to the original Euler equations. Hence, solving the embedded version of the Euler equations will not produce extraneous solutions.
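As a schematic illustration of the idea (not the paper's actual operators or proof): given a first-order operator L, one solves the embedded second-order problem with an extra boundary condition,

```latex
L u = 0 \;\;\longrightarrow\;\; L^{\dagger} L\, u = 0 \;\text{ in } \Omega,
\qquad L u = 0 \;\text{ on } \partial\Omega,
```

and, under suitable conditions, integration by parts gives \(\int_\Omega \lVert L u \rVert^2 \, d\Omega = 0\), so \(L u = 0\) throughout: the additional boundary condition recaptures the original first-order solution.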
ETHERNET BASED EMBEDDED SYSTEM FOR FEL DIAGNOSTICS AND CONTROLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jianxun Yan; Daniel Sexton; Steven Moore
2006-10-24
An Ethernet-based embedded system has been developed to upgrade the Beam Viewer and Beam Position Monitor (BPM) systems within the free-electron laser (FEL) project at Jefferson Lab. The embedded microcontroller was mounted on the front-end I/O cards, with software packages such as the Experimental Physics and Industrial Control System (EPICS) and the Real-Time Executive for Multiprocessor Systems (RTEMS) running as an Input/Output Controller (IOC). By cross-compiling, the EPICS core, the RTEMS kernel, IOC device support, and databases can all be downloaded into the microcontroller. The first version of the BPM electronics based on the embedded controller was built and is currently running in our FEL system. The new version of the BPM, which will use a Single Board IOC (SBIOC) integrating a Field-Programmable Gate Array (FPGA) and a ColdFire embedded microcontroller, is presently under development. The new system has the features of a low-cost IOC, an open-source real-time operating system, plug&play-like ease of installation and flexibility, and provides a much more localized solution.
Trauma-Informed Part C Early Intervention: A Vision, A Challenge, A New Reality
ERIC Educational Resources Information Center
Gilkerson, Linda; Graham, Mimi; Harris, Deborah; Oser, Cindy; Clarke, Jane; Hairston-Fuller, Tody C.; Lertora, Jessica
2013-01-01
Federal directives require that any child less than 3 years old with a substantiated case of abuse be referred to the early intervention (EI) system. This article details the need and presents a vision for a trauma-informed EI system. The authors describe two exemplary program models which implement this vision and recommend steps which the field…
An embedded system developed for hand held assay used in water monitoring
NASA Astrophysics Data System (ADS)
Wu, Lin; Wang, Jianwei; Ramakrishna, Bharath; Hsueh, Mingkai; Liu, Jonathan; Wu, Qufei; Wu, Chao-Cheng; Cao, Mang; Chang, Chein-I.; Jensen, Janet L.; Jensen, James O.; Knapp, Harlan; Daniel, Robert; Yin, Ray
2005-11-01
The US Army Joint Service Agent Water Monitor (JSAWM) program is currently interested in an approach that can implement a hardware-designed device for the ticket-based hand-held assay (currently being developed) used for chemical/biological agent detection. This paper presents a preliminary investigation as proof of concept. Three components are envisioned to accomplish the task. One is the ticket development, which has been undertaken by ANP, Inc. Another component is the software development, which has been carried out by the Remote Sensing Signal and Image Processing Laboratory (RSSIPL) at the University of Maryland, Baltimore County (UMBC). A third component is the embedded system development, which can be used to drive the UMBC-developed software to analyze the ANP-developed HHA tickets on a small pocket-size device like a PDA. The main focus of this paper is to investigate this third component, which is viable but yet to be explored. To facilitate the proof of concept, a flatbed scanner is used in place of a ticket reader to serve as an input device. The Stargate processor board, with Embedded Linux installed, is used as the embedded system. It is connected to an input device such as a scanner as well as output devices such as an LCD display or a laptop. It executes the C-coded processing program developed for this embedded system and outputs its findings on a display device. The embedded system developed and investigated in this paper is the core of a future hardware device. Several issues arising in such an embedded system are addressed. Finally, the proof-of-concept pilot embedded system is demonstrated.
Vision Based Autonomous Robotic Control for Advanced Inspection and Repair
NASA Technical Reports Server (NTRS)
Wehner, Walter S.
2014-01-01
The advanced inspection system is an autonomous control and analysis system that improves inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment, make decisions, and learn from experience. The advanced inspection system is designed to control a robotic manipulator arm, an unmanned ground vehicle, and cameras remotely, automatically, and autonomously. Many computer vision, image processing, and machine learning techniques are available as open source for using vision as sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate the robot hardware.
Parandekar, Priya V; Hratchian, Hrant P; Raghavachari, Krishnan
2008-10-14
Hybrid QM:QM (quantum mechanics:quantum mechanics) and QM:MM (quantum mechanics:molecular mechanics) methods are widely used to calculate the electronic structure of large systems where a full quantum mechanical treatment at a desired high level of theory is computationally prohibitive. The ONIOM (our own N-layer integrated molecular orbital molecular mechanics) approximation is one of the more popular hybrid methods, where the total molecular system is divided into multiple layers, each treated at a different level of theory. In a previous publication, we developed a novel QM:QM electronic embedding scheme within the ONIOM framework, where the model system is embedded in the external Mulliken point charges of the surrounding low-level region to account for the polarization of the model system wave function. Therein, we derived and implemented a rigorous expression for the embedding energy as well as analytic gradients that depend on the derivatives of the external Mulliken point charges. In this work, we demonstrate the applicability of our QM:QM method with point charge embedding and assess its accuracy. We study two challenging systems--zinc metalloenzymes and silicon oxide cages--and demonstrate that electronic embedding shows significant improvement over mechanical embedding. We also develop a modified technique for the energy and analytic gradients using a generalized asymmetric Mulliken embedding method involving an unequal splitting of the Mulliken overlap populations to offer improvement in situations where the Mulliken charges may be deficient.
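For reference, the standard two-layer ONIOM energy expression (well established, though the notation here is ours) is

```latex
E^{\mathrm{ONIOM}} = E^{\mathrm{low}}(\mathrm{real})
- E^{\mathrm{low}}(\mathrm{model})
+ E^{\mathrm{high}}(\mathrm{model}),
```

and electronic embedding augments the model-system Hamiltonians with the external Mulliken point charges \(q_k\) of the surrounding low-level region, \(\hat H_{\mathrm{model}} \rightarrow \hat H_{\mathrm{model}} - \sum_{k \in \mathrm{env}} \sum_i q_k / |\mathbf{r}_i - \mathbf{R}_k|\) (atomic units), which is how the model wave function picks up the polarization described above.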
Landmark navigation and autonomous landing approach with obstacle detection for aircraft
NASA Astrophysics Data System (ADS)
Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.
1997-06-01
A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors like gyros, accelerometers, an artificial horizon, aerodynamic measuring devices, and GPS with vision data taken by conventional CCD cameras mounted on a pan-and-tilt platform, the position of the craft can be determined as well as its position relative to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark the vision system should focus on, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g., due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
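The supervisor's GPS/vision comparison can be caricatured as inverse-variance fusion plus a disagreement gate; the variances and threshold below are invented for illustration, not taken from the paper:

```python
def fuse(z_gps, z_vision, var_gps=25.0, var_vis=4.0):
    """Minimal static fusion of two position fixes by inverse-variance weighting.

    z_gps, z_vision: position estimates (m) from GPS and the vision system.
    """
    w_gps, w_vis = 1.0 / var_gps, 1.0 / var_vis
    x = (w_gps * z_gps + w_vis * z_vision) / (w_gps + w_vis)
    return x, 1.0 / (w_gps + w_vis)          # fused estimate and its variance

def supervisor(z_gps, z_vision, gate=15.0):
    """Flag vision mistracking when the two estimates disagree too much."""
    return abs(z_gps - z_vision) > gate

x, var = fuse(z_gps=102.0, z_vision=98.5)
print(f"fused position: {x:.2f} m (variance {var:.2f}),"
      f" mistracking: {supervisor(102.0, 98.5)}")
```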
Flight Testing an Integrated Synthetic Vision System
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III
2005-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.
Embedded ubiquitous services on hospital information systems.
Kuroda, Tomohiro; Sasaki, Hiroshi; Suenaga, Takatoshi; Masuda, Yasushi; Yasumuro, Yoshihiro; Hori, Kenta; Ohboshi, Naoki; Takemura, Tadamasa; Chihara, Kunihiro; Yoshihara, Hiroyuki
2012-11-01
Hospital Information Systems (HIS) have turned a hospital into a gigantic computer with huge computational power, huge storage, and a wired/wireless local area network. On the other hand, a modern medical device, such as an echograph, is a computer system with several functional units connected by an internal network named a bus. Therefore, we can embed such a medical device into the HIS by simply replacing the bus with the local area network. This paper designed and developed two embedded systems: a ubiquitous echograph system and a networked digital camera. Evaluations of the developed systems clearly show that the proposed approach, embedding existing clinical systems into the HIS, drastically changes productivity in the clinical field. Once a clinical system becomes a pluggable unit for a gigantic computer system, the HIS, the combination of multiple embedded systems with application software designed under deep consideration of clinical processes may lead to the emergence of disruptive innovation in the clinical field.
Neuromorphic vision sensors and preprocessors in system applications
NASA Astrophysics Data System (ADS)
Kramer, Joerg; Indiveri, Giacomo
1998-09-01
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
Enhanced Vision for All-Weather Operations Under NextGen
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Kramer, Lynda J.; Williams, Steven P.
2010-01-01
Recent research in Synthetic/Enhanced Vision technology is analyzed with respect to existing Category II/III performance and certification guidance. The goal is to start the development of performance-based vision systems technology requirements to support future all-weather operations and the NextGen goal of Equivalent Visual Operations. This work shows that existing criteria to operate in Category III weather and visibility are not directly applicable since, unlike today, the primary reference for maneuvering the airplane is based on what the pilot sees visually through the "vision system." New criteria are consequently needed. Several possible criteria are discussed, but more importantly, the factors associated with landing system performance using automatic and manual landings are delineated.
NASA Astrophysics Data System (ADS)
Wiedermann, Marc; Donges, Jonathan F.; Kurths, Jürgen; Donner, Reik V.
2016-04-01
Networks with nodes embedded in a metric space have gained increasing interest in recent years. The effects of spatial embedding on the networks' structural characteristics, however, are rarely taken into account when studying their macroscopic properties. Here, we propose a hierarchy of null models to generate random surrogates from a given spatially embedded network that can preserve certain global and local statistics associated with the nodes' embedding in a metric space. Comparing the original network's and the resulting surrogates' global characteristics allows one to quantify to what extent these characteristics are already predetermined by the spatial embedding of the nodes and links. We apply our framework to various real-world spatial networks and show that the proposed models capture macroscopic properties of the networks under study much better than standard random network models that do not account for the nodes' spatial embedding. Depending on the actual performance of the proposed null models, the networks are categorized into different classes. Since many real-world complex networks are in fact spatial networks, the proposed approach is relevant for disentangling the underlying complex system structure from spatial embedding of nodes in many fields, ranging from social systems over infrastructure and neurophysiology to climatology.
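As a rough illustration of the surrogate idea described above, the following is a minimal Python sketch (an assumption of ours, not the authors' code) of one length-preserving null model: node positions stay fixed and every original link is replaced by a random link drawn from the same length bin, so the global link-length distribution is approximately preserved. The function name, the binning, and the pos format (a dict mapping node to coordinates) are illustrative.

import itertools
import random
import networkx as nx
import numpy as np

def length_preserving_surrogate(G, pos, n_bins=20, seed=0):
    """Surrogate with fixed node positions and a preserved link-length histogram."""
    rng = random.Random(seed)
    dist = lambda u, v: float(np.linalg.norm(np.subtract(pos[u], pos[v])))
    lengths = [dist(u, v) for u, v in G.edges()]
    bins = np.histogram_bin_edges(lengths, bins=n_bins)
    # Group every candidate node pair by the length bin it falls into.
    pairs_by_bin = {}
    for u, v in itertools.combinations(G.nodes(), 2):
        pairs_by_bin.setdefault(int(np.digitize(dist(u, v), bins)), []).append((u, v))
    S = nx.Graph()
    S.add_nodes_from(G.nodes())
    for length in lengths:  # one surrogate link per original link, same length bin
        u, v = rng.choice(pairs_by_bin[int(np.digitize(length, bins))])
        S.add_edge(u, v)
    return S

Comparing, say, the average clustering of the original network with that of an ensemble of such surrogates then indicates how much of the observed value is already implied by the spatial embedding.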
A Vision for Systems Engineering Applied to Wind Energy (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felker, F.; Dykes, K.
2015-01-01
This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.
NASA Technical Reports Server (NTRS)
1990-01-01
Biofeedtrac, Inc.'s Accommotrac Vision Trainer, invented by Dr. Joseph Trachtman, is based on vision research performed by Ames Research Center and a special optometer developed for the Ames program by Stanford Research Institute. In the United States, about 150 million people are myopes (nearsighted), who tend to overfocus when they look at distant objects, causing blurry distant vision, or hyperopes (farsighted), whose vision blurs when they look at close objects because they tend to underfocus. The Accommotrac system is an optical/electronic system used by a doctor as an aid in teaching a patient how to contract and relax the ciliary body, the focusing muscle. The key is biofeedback, wherein the patient learns to control a bodily process or function he is not normally aware of. Trachtman claims a 90 percent success rate for correcting, improving, or stopping focusing problems. The Vision Trainer has also proved effective in treating other eye problems such as eye oscillation, cross eyes, and lazy eye, and in professional sports to improve athletes' peripheral vision and reaction time.
NASA Technical Reports Server (NTRS)
Crouch, Roger
2004-01-01
Viewgraphs on NASA's transition to its vision for space exploration is presented. The topics include: 1) Strategic Directives Guiding the Human Support Technology Program; 2) Progressive Capabilities; 3) A Journey to Inspire, Innovate, and Discover; 4) Risk Mitigation Status Technology Readiness Level (TRL) and Countermeasures Readiness Level (CRL); 5) Biological And Physical Research Enterprise Aligning With The Vision For U.S. Space Exploration; 6) Critical Path Roadmap Reference Missions; 7) Rating Risks; 8) Current Critical Path Roadmap (Draft) Rating Risks: Human Health; 9) Current Critical Path Roadmap (Draft) Rating Risks: System Performance/Efficiency; 10) Biological And Physical Research Enterprise Efforts to Align With Vision For U.S. Space Exploration; 11) Aligning with the Vision: Exploration Research Areas of Emphasis; 12) Code U Efforts To Align With The Vision For U.S. Space Exploration; 13) Types of Critical Path Roadmap Risks; and 14) ISS Human Support Systems Research, Development, and Demonstration. A summary discussing the vision for U.S. space exploration is also provided.
Homework system development with the intention of supporting Saudi Arabia's vision 2030
NASA Astrophysics Data System (ADS)
Elgimari, Atifa; Alshahrani, Shafya; Al-shehri, Amal
2017-10-01
This paper suggests a web-based homework system. The suggested homework system can serve targeted students aged 7-11 years. With the suggested homework system, hard copies of homework are replaced by soft copies, and parents are involved in the education process electronically. The system is expected to contribute to realizing Saudi Arabia's Vision 2030, especially in the education sector, which regards primary education as its foundation stone, as the success of the Vision depends in large measure on reforms in the education system that generate a better basis for the employment of young Saudis.
Object tracking with stereo vision
NASA Technical Reports Server (NTRS)
Huber, Eric
1994-01-01
A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.
Transplant Image Processing Technology under Windows into the Platform Based on MiniGUI
NASA Astrophysics Data System (ADS)
Gan, Lan; Zhang, Xu; Lv, Wenya; Yu, Jia
MFC provides a large number of digital image processing-related API functions, together with object-oriented class mechanisms, which gives image processing technology strong support in Windows. In embedded systems, however, image processing technology does not enjoy the MFC environment available in Windows, owing to hardware and software restrictions. Therefore, this paper draws on the experience of image processing technology in Windows and transplants it to MiniGUI-based embedded systems. The results show that MiniGUI/Embedded graphical user interface applications for image processing achieve good results when used in an embedded image processing system.
Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.
Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun
2016-08-16
Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.
Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation
Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun
2016-01-01
Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888
NASA Astrophysics Data System (ADS)
Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping
2017-12-01
In the process of dismounting and assembling the drop switch for the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, gripper, and the bolts used to fix the drop switch. To solve it, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs the following three steps: firstly, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views, a typical coarse-to-fine strategy; secondly, the system calculates the epipolar line, a sequence of candidate regions containing matching points is generated from the neighborhood of the epipolar line, and the optimal matching image is identified by computing the correlation between the template image in the left view and each region in the sequence; finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels and the positioning accuracy in the world coordinate system is within 3 mm, so the positioning accuracy of the binocular vision system satisfies the requirements for dismounting and assembling the drop switch.
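As an illustration of the matching step described in the abstract, here is a minimal OpenCV/NumPy sketch (our assumption, not the authors' code) of correlation matching restricted to a band around the epipolar line; the fundamental matrix F, grayscale inputs, and window sizes are illustrative.

import cv2
import numpy as np

def match_along_epipolar(left_img, right_img, pt_left, F, tmpl_half=15, band=4):
    """Find the right-view match of pt_left, searching only a strip around
    the epipolar line l = F @ [x, y, 1] (line coefficients a, b, c)."""
    x, y = pt_left
    tmpl = left_img[y - tmpl_half:y + tmpl_half + 1, x - tmpl_half:x + tmpl_half + 1]
    a, b, c = (F @ np.array([x, y, 1.0])).ravel()
    h, w = right_img.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    near_line = np.abs(a * xs + b * ys + c) / np.hypot(a, b) <= band
    score = cv2.matchTemplate(right_img, tmpl, cv2.TM_CCOEFF_NORMED)
    mh, mw = score.shape
    # Suppress scores for windows whose centers lie off the epipolar band.
    score[~near_line[tmpl_half:tmpl_half + mh, tmpl_half:tmpl_half + mw]] = -1.0
    _, _, _, max_loc = cv2.minMaxLoc(score)
    return (max_loc[0] + tmpl_half, max_loc[1] + tmpl_half)  # window center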
NASA Astrophysics Data System (ADS)
Nemaungani, Takalani
2015-01-01
The vision for astronomy in Africa is embedded in the African Space Policy adopted by the African Union in early 2014. The vision is about positioning Africa as an emerging hub for astronomy sciences and facilities. Africa recognized the need to take advantage of its natural resource: the geographical advantage of the clear southern skies and pristine sites for astronomy. The Pan African University (PAU) initiative also presents an opportunity as a post-graduate training and research network of university nodes in five regions of Africa, supported by the African Union. The Southern African node, based in South Africa, concentrates on space sciences, which also includes astronomy. The PAU aims to provide the opportunity for advanced graduate training and postgraduate research to high-performing African students. Its objectives also include promoting the mobility of students and teachers and harmonizing programs and degrees. A number of astronomy initiatives have burgeoned in the Southern African region, including the Southern African Large Telescope (SALT), HESS (High Energy Stereoscopic System), the SKA (Square Kilometre Array), and the AVN (African Very Long Baseline Interferometry Network). There is a growing appetite for astronomy sciences in Africa. In East Africa, the astronomy community is well organized and growing: the East African Astronomical Society (EAAS) held its fourth annual conference since 2010 from 30 June to 4 July 2014 at the University of Rwanda. Centred on the 'Role of Astronomy in Socio-Economic Transformation,' this conference aimed at strengthening capacity building in astronomy, astrophysics, and space science in general, while providing a forum for astronomers from the region to train young and upcoming scientists.
NASA Astrophysics Data System (ADS)
Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki
2016-04-01
The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU), in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable with that of a conventional laser displacement sensor.
Middleware Architecture for Ambient Intelligence in the Networked Home
NASA Astrophysics Data System (ADS)
Georgantas, Nikolaos; Issarny, Valerie; Mokhtar, Sonia Ben; Bromberg, Yerom-David; Bianco, Sebastien; Thomson, Graham; Raverdy, Pierre-Guillaume; Urbieta, Aitor; Cardoso, Roberto Speicys
With computing and communication capabilities now embedded in most physical objects of the surrounding environment and most users carrying wireless computing devices, the Ambient Intelligence (AmI) / pervasive computing vision [28] pioneered by Mark Weiser [32] is becoming a reality. Devices carried by nomadic users can seamlessly network with a variety of devices, both stationary and mobile, both nearby and remote, providing a wide range of functional capabilities, from base sensing and actuating to rich applications (e.g., smart spaces). This then allows the dynamic deployment of pervasive applications, which dynamically compose functional capabilities accessible in the pervasive network at the given time and place of an application request.
The Ocean Sampling Day Consortium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kopf, Anna; Bicak, Mesude; Kottmann, Renzo
In this study, Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world’s oceans. It is a simultaneous global mega-sequencing campaign aiming to generate the largest standardized microbial data set in a single day. This will be achievable only through the coordinated efforts of an Ocean Sampling Day Consortium, supportive partnerships and networks between sites. This commentary outlines the establishment, function and aims of the Consortium and describes our vision for a sustainable study of marine microbial communities and their embedded functional traits.
The Ocean Sampling Day Consortium
Kopf, Anna; Bicak, Mesude; Kottmann, Renzo; ...
2015-06-19
In this study, Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world’s oceans. It is a simultaneous global mega-sequencing campaign aiming to generate the largest standardized microbial data set in a single day. This will be achievable only through the coordinated efforts of an Ocean Sampling Day Consortium, supportive partnerships and networks between sites. This commentary outlines the establishment, function and aims of the Consortium and describes our vision for a sustainable study of marine microbial communities and their embedded functional traits.
Ultrastructure of the mink parotid gland.
Tandler, B
1991-01-01
Acini in the parotid gland of the North American mink (Mustela vison) are composed of seromucous cells that contain secretory granules of peculiar morphology. Many of the granules consist of a light matrix in which is embedded an inclusion made up of dense, frequently parallel rodlets in a fibrillar material of moderate density. As in the submandibular gland of the same animal, the tall cells of the parotid striated ducts contain numerous polygonal, often rhomboidal, crystalloids in their apical cytoplasm. These crystalloids are present equally in both sexes and are as abundant in the parotid as in the submandibular gland. PMID:1769893
NASA Technical Reports Server (NTRS)
Lewandowski, Leon; Struckman, Keith
1994-01-01
Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.
Localized attacks on spatially embedded networks with dependencies.
Berezin, Yehiel; Bashan, Amir; Danziger, Michael M; Li, Daqing; Havlin, Shlomo
2015-03-11
Many real-world complex systems, such as critical infrastructure networks, are embedded in space, and their components may depend on one another to function. They are also susceptible to geographically localized damage caused by malicious attacks or natural disasters. Here, we study a general model of spatially embedded networks with dependencies under localized attacks. We develop a theoretical and numerical approach to describe and predict the effects of localized attacks on spatially embedded systems with dependencies. Surprisingly, we find that a localized attack can cause substantially more damage than an equivalent random attack. Furthermore, we find that for a broad range of parameters, systems which appear stable are in fact metastable. Though robust to random failures, even of a finite fraction, if subjected to a localized attack larger than a critical size, which is independent of the system size (i.e., a zero fraction), a cascading failure emerges which leads to complete system collapse. Our results demonstrate the potentially high risk of localized attacks on spatially embedded network systems with dependencies and may be useful for designing more resilient systems.
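The flavor of such an experiment can be captured in a few lines. Below is a minimal NetworkX sketch (ours, not the authors' model, and without the dependency links) of a localized attack: remove every node within radius r of a random epicenter and measure the surviving giant component.

import networkx as nx
import numpy as np

def localized_attack(G, pos, r, seed=0):
    """Fraction of nodes left in the giant component after removing all
    nodes within distance r of a randomly chosen epicenter."""
    rng = np.random.default_rng(seed)
    epicenter = np.asarray(pos[rng.choice(list(G.nodes()))])
    removed = [n for n in G.nodes()
               if np.linalg.norm(np.asarray(pos[n]) - epicenter) <= r]
    H = G.copy()
    H.remove_nodes_from(removed)
    if H.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(H), key=len)) / G.number_of_nodes()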
Tarafder, Solaiman; Koch, Alia; Jun, Yena; Chou, Conrad; Awadallah, Mary R; Lee, Chang H
2016-04-25
Three-dimensional (3D) printing has emerged as an efficient tool for tissue engineering and regenerative medicine, given its advantages for constructing custom-designed scaffolds with tunable microstructure/physical properties. Here we developed a micro-precise spatiotemporal delivery system embedded in 3D printed scaffolds. PLGA microspheres (μS) were encapsulated with growth factors (GFs) and then embedded inside the PCL microfibers that constitute custom-designed 3D scaffolds. Given the substantial difference in the melting points of PLGA and PCL and their low heat conductivity, the μS were able to maintain their original structure while protecting the GFs' bioactivities. Micro-precise spatial control of multiple GFs was achieved by interchanging dispensing cartridges during a single printing process. Spatially controlled delivery of GFs, with a prolonged release, guided the formation of multi-tissue interfaces from bone marrow derived mesenchymal stem/progenitor cells (MSCs). To investigate the efficacy of the micro-precise delivery system embedded in the 3D printed scaffold, temporomandibular joint (TMJ) disc scaffolds were fabricated with micro-precise spatiotemporal delivery of CTGF and TGFβ3, mimicking native-like multiphase fibrocartilage. In vitro, TMJ disc scaffolds spatially embedded with CTGF/TGFβ3-μS resulted in the formation of multiphase fibrocartilaginous tissues from MSCs. In vivo, TMJ disc perforation was performed in rabbits, followed by implantation of CTGF/TGFβ3-μS-embedded scaffolds. After 4 weeks, the CTGF/TGFβ3-μS-embedded scaffolds significantly improved healing of the perforated TMJ disc as compared to the degenerated TMJ disc in the control group, whose scaffolds were embedded with empty μS. In addition, CTGF/TGFβ3-μS-embedded scaffolds significantly prevented arthritic changes on the TMJ condyles. In conclusion, our micro-precise spatiotemporal delivery system embedded in 3D printing may serve as an efficient tool to regenerate complex and inhomogeneous tissues.
Cluster Computing for Embedded/Real-Time Systems
NASA Technical Reports Server (NTRS)
Katz, D.; Kepner, J.
1999-01-01
Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.
Design and realization of flash translation layer in tiny embedded system
NASA Astrophysics Data System (ADS)
Ren, Xiaoping; Sui, Chaoya; Luo, Zhenghua; Cao, Wenji
2018-05-01
We design a NAND Flash storage solution for tiny embedded devices, based on an in-depth study of the characteristics of the NAND Flash widely used in embedded devices, in order to keep pace with the trend toward intelligent interconnection and to solve the problem of storing large data volumes in tiny embedded systems. The hierarchical structure and functional purposes of the system are introduced. The design and realization of the address mapping, error correction, bad block management, wear leveling, garbage collection, and other algorithms in the flash translation layer are described in detail. The NAND Flash driver and management are realized on an STM32 microcontroller, thereby verifying the effectiveness and feasibility of the design.
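To make the named flash translation layer pieces concrete, here is a toy Python sketch (an illustration of the general technique, not the paper's STM32 implementation) of a page-mapping FTL with out-of-place updates, per-block wear counters, and greedy garbage collection; ECC and bad-block handling are omitted and over-provisioning is assumed so a victim block always holds some stale pages.

class TinyFTL:
    PAGES_PER_BLOCK = 4

    def __init__(self, n_blocks):
        self.map = {}                          # logical page -> (block, page)
        self.erase_count = [0] * n_blocks      # wear counter per block
        self.valid = [[False] * self.PAGES_PER_BLOCK for _ in range(n_blocks)]
        self.free = [(b, p) for b in range(n_blocks)
                     for p in range(self.PAGES_PER_BLOCK)]

    def write(self, lpn):
        if not self.free:
            self._garbage_collect()
        if lpn in self.map:                    # out-of-place update: stale the old copy
            b, p = self.map[lpn]
            self.valid[b][p] = False
        b, p = self.free.pop(0)
        self.valid[b][p] = True
        self.map[lpn] = (b, p)

    def _garbage_collect(self):
        # Greedy victim choice: fewest valid pages, ties broken by least wear.
        victim = min(range(len(self.erase_count)),
                     key=lambda b: (sum(self.valid[b]), self.erase_count[b]))
        live = [l for l, (b, _) in self.map.items() if b == victim]
        self.valid[victim] = [False] * self.PAGES_PER_BLOCK
        self.erase_count[victim] += 1          # erase the block
        self.free += [(victim, p) for p in range(self.PAGES_PER_BLOCK)]
        for l in live:                         # relocate live pages
            self.write(l)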
Implementation of image transmission server system using embedded Linux
NASA Astrophysics Data System (ADS)
Park, Jong-Hyun; Jung, Yeon Sung; Nam, Boo Hee
2005-12-01
In this paper, we implemented an image transmission server system using an embedded system that is built for a specific purpose and is easy to install and move. Since the embedded system has lower capability than a PC, we had to reduce the computational load of baseline JPEG image compression and transmission. We used the Red Hat Linux 9.0 OS on the host PC and a target board based on embedded Linux. The image sequences are obtained from the camera attached to an FPGA (Field Programmable Gate Array) board with an Altera chip. For effectiveness, and to avoid vendor-specific constraints, we implemented the device driver as a kernel module.
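A minimal sketch of such a frame server (our illustration under assumptions, not the paper's implementation) can be written with OpenCV and plain sockets: JPEG-compress each frame and stream it over TCP behind a 4-byte length prefix; the host, port, and quality values are placeholders.

import socket
import struct
import cv2

def serve_frames(host="0.0.0.0", port=8000, device=0, quality=70):
    cap = cv2.VideoCapture(device)
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
            if not ok:
                continue
            conn.sendall(struct.pack("!I", len(buf)) + buf.tobytes())  # length-prefixed JPEG
    finally:
        conn.close()
        srv.close()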
Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J
2005-01-01
We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.
Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering
NASA Astrophysics Data System (ADS)
Barnes, Nick; Scott, Adele F.; Lieby, Paulette; Petoe, Matthew A.; McCarthy, Chris; Stacey, Ashley; Ayton, Lauren N.; Sinclair, Nicholas C.; Shivdasani, Mohit N.; Lovell, Nigel H.; McDermott, Hugh J.; Walker, Janine G.; BVA Consortium,the
2016-06-01
Objective. One strategy to improve the effectiveness of prosthetic vision devices is to process incoming images to ensure that key information can be perceived by the user. This paper presents the first comprehensive results of vision function testing for a suprachoroidal retinal prosthetic device utilizing 20 stimulating electrodes. Further, we investigate whether using image filtering can improve results on a light localization task for implanted participants compared to minimal vision processing. No controlled implanted-participant studies have yet investigated whether vision processing methods that are not task-specific can lead to improved results. Approach. Three participants with profound vision loss from retinitis pigmentosa were implanted with a suprachoroidal retinal prosthesis. All three completed multiple trials of a light localization test, and one participant completed multiple trials of acuity tests. The visual representations used were: Lanczos2 (a high-quality Nyquist bandlimited downsampling filter); minimal vision processing (MVP); wide-view regional averaging filtering (WV); scrambled; and system off. Main results. Using Lanczos2, all three participants successfully completed a light localization task and obtained a significantly higher percentage of correct responses than using MVP (p ≤ 0.025) or with the system off (p < 0.0001). Further, in a preliminary result using Lanczos2, one participant successfully completed grating acuity and Landolt C tasks, and showed significantly better performance (p = 0.004) compared to WV, scrambled, and system off on the grating acuity task. Significance. Participants successfully completed vision tasks using a 20-electrode suprachoroidal retinal prosthesis. Vision processing with a Nyquist bandlimited image filter has shown an advantage for a light localization task. This result suggests that this and more advanced, targeted vision processing schemes may become important components of retinal prostheses to enhance performance. ClinicalTrials.gov Identifier: NCT01603576.
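For context, the Lanczos2 kernel named above is L(x) = sinc(x) sinc(x/2) for |x| < 2 and 0 otherwise. A minimal NumPy sketch of the kernel and a 1D downsampler built from it follows; this illustrates the filter family, not the study's implementation.

import numpy as np

def lanczos2(x):
    """Lanczos kernel with a = 2; np.sinc is the normalized sinc."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 2, np.sinc(x) * np.sinc(x / 2), 0.0)

def downsample_1d(signal, factor):
    """Decimate by an integer factor, stretching the kernel to bandlimit."""
    n_out = len(signal) // factor
    out = np.empty(n_out)
    for i in range(n_out):
        center = i * factor
        taps = np.arange(center - 2 * factor, center + 2 * factor + 1)
        taps = taps[(taps >= 0) & (taps < len(signal))]
        w = lanczos2((taps - center) / factor)
        out[i] = np.sum(w * signal[taps]) / np.sum(w)
    return out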
Research into the Architecture of CAD Based Robot Vision Systems
1988-02-09
Vision and "Automatic Generation of Recognition Features for Com- puter Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica , vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge
Vision, Leadership, and Change: The Case of Ramah Summer Camps
ERIC Educational Resources Information Center
Reimer, Joseph
2010-01-01
In his retrospective essay, Seymour Fox (1997) identified "vision" as the essential element that shaped the Ramah camp system. I will take a critical look at Fox's main claims: (1) A particular model of vision was essential to the development of Camp Ramah; and (2) That model of vision should guide contemporary Jewish educators in creating Jewish…
Image segmentation for enhancing symbol recognition in prosthetic vision.
Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming
2012-01-01
Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions, and using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.
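A minimal sketch of that idea (ours; the segmentation method and parameters are placeholders, not the authors' choices), assuming scikit-image and a grayscale image: segment, keep the region under the fixation point, and downsample its mask to the phosphene grid.

import numpy as np
from skimage.segmentation import felzenszwalb
from skimage.transform import resize

def phosphenize_region(image, fixation_xy, grid_shape=(24, 32)):
    """Binary phosphene pattern for the segment under fixation (x, y)."""
    labels = felzenszwalb(image, scale=100)
    x, y = fixation_xy
    mask = labels == labels[y, x]
    return resize(mask.astype(float), grid_shape, anti_aliasing=True) > 0.5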
77 FR 21861 - Special Conditions: Boeing, Model 777F; Enhanced Flight Vision System
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... System AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special conditions; request for... with an advanced, enhanced flight vision system (EFVS). The EFVS consists of a head-up display (HUD) system modified to display forward-looking infrared (FLIR) imagery. The applicable airworthiness...
NASA Technical Reports Server (NTRS)
Alexander, Amy L.; Prinzel, Lawrence J., III; Wickens, Christopher D.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.
2007-01-01
Synthetic vision systems provide an in-cockpit view of terrain and other hazards via a computer-generated display representation. Two experiments examined several display concepts for synthetic vision and evaluated how such displays modulate pilot performance. Experiment 1 (24 general aviation pilots) compared three navigational display (ND) concepts: 2D coplanar, 3D, and split-screen. Experiment 2 (12 commercial airline pilots) evaluated baseline 'blue sky/brown ground' or synthetic vision-enabled primary flight displays (PFDs) and three ND concepts: 2D coplanar with and without synthetic vision and a dynamic multi-mode rotatable exocentric format. In general, the results pointed to an overall advantage for a split-screen format, whether it be stand-alone (Experiment 1) or available via rotatable viewpoints (Experiment 2). Furthermore, Experiment 2 revealed benefits associated with utilizing synthetic vision in both the PFD and ND representations and the value of combined ego- and exocentric presentations.
Panoramic stereo sphere vision
NASA Astrophysics Data System (ADS)
Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian
2013-01-01
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments, and attitude estimation are some of the applications which will benefit from PSSV.
Modeling of Embedded Human Systems
2013-07-01
ISAT study [7] for DARPA in 2005 concretized the notion of an embedded human, who is a necessary component of the system. The proposed work integrates... Technology, IEEE Transactions on, vol. 16, no. 2, pp. 229-244, March 2008. [7] C. J. Tomlin and S. S. Sastry, "Embedded humans," tech. rep., DARPA ISAT
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to SEU errors in its main memory. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of the wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit of MRAM-based working memory is less than 1 FIT (failure in time). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
Enhanced operator perception through 3D vision and haptic feedback
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren
2012-06-01
Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.
Technical Challenges in the Development of a NASA Synthetic Vision System Concept
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Parrish, Russell V.; Kramer, Lynda J.; Harrah, Steve; Arthur, J. J., III
2002-01-01
Within NASA's Aviation Safety Program, the Synthetic Vision Systems Project is developing display system concepts to improve pilot terrain/situation awareness by providing a perspective synthetic view of the outside world through an on-board database driven by precise aircraft positioning information updating via Global Positioning System-based data. This work is aimed at eliminating visibility-induced errors and low visibility conditions as a causal factor to civil aircraft accidents, as well as replicating the operational benefits of clear day flight operations regardless of the actual outside visibility condition. Synthetic vision research and development activities at NASA Langley Research Center are focused around a series of ground simulation and flight test experiments designed to evaluate, investigate, and assess the technology which can lead to operational and certified synthetic vision systems. The technical challenges that have been encountered and that are anticipated in this research and development activity are summarized.
Time-Centric Models For Designing Embedded Cyber-physical Systems
2009-10-09
Time-centric Models For Designing Embedded Cyber-physical Systems. John C. Eidson, Edward A. Lee, Slobodan Matic, Sanjit A. Seshia, Jia Zou. Electrical... implementations, such a uniform notion of time cannot be precisely realized. Time-triggered networks [10] and time synchronization [9] can be used to
A Vision in Jeopardy: Royal Navy Maritime Autonomous Systems (MAS)
2017-03-31
Chapter 6 will propose a new MAS vision for the RN. However, before doing so, a fresh look at the problem is required. Consensus of the Problem, Not the... continuous investment and assessment, the RN has failed to deliver any sustainable MAS operational capability. A vision for MAS finally materialized in 2014. Yet, the vision...
Robot path planning using expert systems and machine vision
NASA Astrophysics Data System (ADS)
Malone, Denis E.; Friedrich, Werner E.
1992-02-01
This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path, it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and to interpret these with a knowledge-based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace, and it relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.
Embedded object concept with a telepresence robot system
NASA Astrophysics Data System (ADS)
Vallius, Tero; Röning, Juha
2005-10-01
This paper presents the Embedded Object Concept (EOC) and a telepresence robot system which serves as a test case for the EOC. The EOC utilizes common object-oriented methods used in software by applying them to combined Lego-like software-hardware entities. These entities represent objects in object-oriented design methods, and they are the building blocks of embedded systems. The goal of the EOC is to make the design of embedded systems faster and easier. This concept enables people without comprehensive knowledge of electronics design to create new embedded systems, and for experts it shortens the design time of new embedded systems. We present the current status of the EOC, including two generations of embedded objects named Atomi objects. The first generation of Atomi objects has been tested with different applications and found to be functional, but not optimal. The second generation aims to correct the issues found with the first generation, and it is being tested in a relatively complex test case. The test case is a telepresence robot consisting of a two-wheeled, human-height robot and its computer counterpart. The robot has been constructed using incremental device development, which is made possible by the architecture of the EOC. The robot contains video and audio exchange capability and a controlling and balancing system for driving on two wheels. The robot is built in two versions, the first consisting of a PDA device and Atomi objects, and the second consisting of only Atomi objects. The robot is currently incomplete, but for the most part it has been successfully tested.
Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase
Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling
2015-01-01
In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility conditions often prevail, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate. PMID:26378533
Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase.
Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling
2015-09-10
In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility conditions often prevail, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate.
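The consistency check underlying RAIM can be sketched in a few lines; in the VA-RAIM spirit, vision-derived landmark ranges would simply be stacked as extra rows of the measurement model. This is a generic illustration under assumptions (a common noise sigma and a chosen false-alarm probability), not the paper's formulation.

import numpy as np
from scipy.stats import chi2

def ls_residuals(H, y):
    """Least-squares residuals of measurements y against geometry matrix H."""
    return y - H @ np.linalg.pinv(H) @ y

def raim_consistency_test(H, y, sigma, p_fa=1e-3):
    """Chi-square test on the residual sum of squares; H is m x n (n = 4 for
    three position states plus receiver clock), so redundancy is m - n."""
    m, n = H.shape
    r = ls_residuals(H, y)
    t = float(r @ r) / sigma**2
    return t > chi2.ppf(1 - p_fa, df=m - n)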
Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review
Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul
2012-01-01
Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. This question has been hindered by the lack of information on accurate measurements of gait disorders. Thus, this article reviews the rehabilitation systems for gait disorders using vision-based and non-vision-based sensor technologies, as well as combinations of these. All papers published in the English language between 1990 and June 2012 that had the phrases "gait disorder", "rehabilitation", "vision sensor", or "non vision sensor" in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical words "and", "or", and "not" were also used in the article searching procedure. Out of the 91 published articles found, this review identified 84 that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review paper will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548
The genetics of normal and defective color vision.
Neitz, Jay; Neitz, Maureen
2011-04-13
The contributions of genetics research to the science of normal and defective color vision over the previous few decades are reviewed, emphasizing the developments in the 25 years since the last anniversary issue of Vision Research. Understanding of the biology underlying color vision has been vaulted forward through the application of the tools of molecular genetics. For all their complexity, the biological processes responsible for color vision are more accessible than those of many other neural systems. This is partly because of the wealth of genetic variations that affect color perception, both within and across species, and because components of the color vision system lend themselves to genetic manipulation. Mutations and rearrangements in the genes encoding the long, middle, and short wavelength sensitive cone pigments are responsible for color vision deficiencies, and mutations have been identified that affect the number of cone types, the absorption spectra of the pigments, the functionality and viability of the cones, and the topography of the cone mosaic. The addition of an opsin gene, as occurred in the evolution of primate color vision and as has been done in experimental animals, can produce expanded color vision capacities, and this has provided insight into the underlying neural circuitry. Copyright © 2010 Elsevier Ltd. All rights reserved.
Acquired color vision deficiency.
Simunovic, Matthew P
2016-01-01
Acquired color vision deficiency occurs as the result of ocular, neurologic, or systemic disease. A wide array of conditions may affect color vision, ranging from diseases of the ocular media through to pathology of the visual cortex. Traditionally, acquired color vision deficiency is considered a separate entity from congenital color vision deficiency, although emerging clinical and molecular genetic data would suggest a degree of overlap. We review the pathophysiology of acquired color vision deficiency, the data on its prevalence, theories for the preponderance of acquired S-mechanism (or tritan) deficiency, and discuss tests of color vision. We also briefly review the types of color vision deficiencies encountered in ocular disease, with an emphasis placed on larger or more detailed clinical investigations. Copyright © 2016 Elsevier Inc. All rights reserved.
A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Obergfell, Klaus
1991-01-01
The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error into a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson et al., 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
Sabattini, E; Bisgaard, K; Ascani, S; Poggi, S; Piccioli, M; Ceccarelli, C; Pieri, F; Fraternali-Orcioni, G; Pileri, S A
1998-07-01
To assess a newly developed immunohistochemical detection system, the EnVision++. A large series of differently processed normal and pathological samples and 53 relevant monoclonal antibodies were chosen. A chessboard titration assay was used to compare the results provided by the EnVision++ system with those of the APAAP, CSA, LSAB, SABC, and ChemMate methods, when applied either manually or in a TechMate 500 immunostainer. With the vast majority of the antibodies, EnVision++ allowed two- to fivefold higher dilutions than the APAAP, LSAB, SABC, and ChemMate techniques, the staining intensity and percentage of expected positive cells being the same. With some critical antibodies (such as the anti-CD5), it turned out to be superior in that it achieved consistently reproducible results with differently fixed or overfixed samples. Only the CSA method, which includes tyramide based enhancement, allowed the same dilutions as the EnVision++ system, and in one instance (with the anti-cyclin D1 antibody) represented the gold standard. The EnVision++ is an easy to use system, which avoids the possibility of disturbing endogenous biotin and lowers the cost per test by increasing the dilutions of the primary antibodies. Being a two step procedure, it reduces both the assay time and the workload.
Sabattini, E; Bisgaard, K; Ascani, S; Poggi, S; Piccioli, M; Ceccarelli, C; Pieri, F; Fraternali-Orcioni, G; Pileri, S A
1998-01-01
AIM: To assess a newly developed immunohistochemical detection system, the EnVision++. METHODS: A large series of differently processed normal and pathological samples and 53 relevant monoclonal antibodies were chosen. A chessboard titration assay was used to compare the results provided by the EnVision++ system with those of the APAAP, CSA, LSAB, SABC, and ChemMate methods, when applied either manually or in a TechMate 500 immunostainer. RESULTS: With the vast majority of the antibodies, EnVision++ allowed two- to fivefold higher dilutions than the APAAP, LSAB, SABC, and ChemMate techniques, the staining intensity and percentage of expected positive cells being the same. With some critical antibodies (such as the anti-CD5), it turned out to be superior in that it achieved consistently reproducible results with differently fixed or overfixed samples. Only the CSA method, which includes tyramide based enhancement, allowed the same dilutions as the EnVision++ system, and in one instance (with the anti-cyclin D1 antibody) represented the gold standard. CONCLUSIONS: The EnVision++ is an easy to use system, which avoids the possibility of disturbing endogenous biotin and lowers the cost per test by increasing the dilutions of the primary antibodies. Being a two step procedure, it reduces both the assay time and the workload. PMID:9797726
A lightweight, inexpensive robotic system for insect vision.
Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex
2017-09-01
Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Implementation of a robotic flexible assembly system
NASA Technical Reports Server (NTRS)
Benton, Ronald C.
1987-01-01
As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies, together with ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results, and autonomous recovery from some errors. It is programmed off-line with semiautomatic action planning.
Prototyping machine vision software on the World Wide Web
NASA Astrophysics Data System (ADS)
Karantalis, George; Batchelor, Bruce G.
1998-10-01
Interactive image processing is a proven technique for analyzing industrial vision applications and building prototype systems. Several of the previous implementations have used dedicated hardware to perform the image processing, with a top layer of software providing a convenient user interface. More recently, self-contained software packages have been devised and these run on a standard computer. The advent of the Java programming language has made it possible to write platform-independent software, operating over the Internet or a company-wide Intranet. Thus, there arises the possibility of designing at least some shop-floor inspection/control systems without the vision engineer ever entering the factories where they will be used. If successful, this project will have a major impact on the productivity of vision systems designers.
Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas
2018-01-01
The University of Florida is taking a multidisciplinary approach to fusing the data from 3D vision sensors and radiological sensors in hopes of creating a system capable of not only detecting the presence of a radiological threat, but also tracking it. The key to developing such a vision-aided radiological detection system lies in the fact that the count rate is inversely proportional to the square of the source-detector distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
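The inverse-square relation at the core of the fusion can be written down directly. Below is a minimal NumPy sketch (ours, not the University of Florida calibration code) that fits source strength and background from paired vision distances and detector counts, after which predicted and measured rates can be compared.

import numpy as np

def fit_inverse_square(distances, counts):
    """Least-squares fit of counts ~ A / d**2 + B (A: source term, B: background)."""
    d = np.asarray(distances, dtype=float)
    X = np.column_stack([1.0 / d**2, np.ones_like(d)])
    (A, B), *_ = np.linalg.lstsq(X, np.asarray(counts, dtype=float), rcond=None)
    return A, B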
Test of Lander Vision System for Mars 2020
2016-10-04
A prototype of the Lander Vision System for NASA Mars 2020 mission was tested in this Dec. 9, 2014, flight of a Masten Space Systems Xombie vehicle at Mojave Air and Space Port in California. http://photojournal.jpl.nasa.gov/catalog/PIA20848
Driver's Enhanced Vision System (DEVS)
DOT National Transportation Integrated Search
1996-12-23
This advisory circular (AC) contains performance standards, specifications, and recommendations for Driver's Enhanced Vision Systems (DEVS). The FAA recommends the use of the guidance in this publication for the design and installation of DEVS e...
Component Composition for Embedded Systems Using Semantic Aspect-Oriented Programming
2004-10-01
real-time systems for the defense community. Our research focused on Real-Time Java implementation and analysis techniques. Real-Time Java is important for the defense community because it holds out the promise of enabling developers to apply COTS Java technology to specialized military embedded systems. It also promises to allow the defense community to utilize a large Java-literate workforce for building defense systems. Our research has delivered several techniques that may make Real-Time Java a better platform for developing embedded
Vertically integrated photonic multichip module architecture for vision applications
NASA Astrophysics Data System (ADS)
Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong
2000-05-01
The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia- based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.
Design of direct-vision cyclo-olefin-polymer double Amici prism for spectral imaging.
Wang, Lei; Shao, Zhengzheng; Tang, Wusheng; Liu, Jiying; Nie, Qianwen; Jia, Hui; Dai, Suian; Zhu, Jubo; Li, Xiujian
2017-10-20
A direct-vision Amici prism is a desirable dispersion element for spectrometers and spectral imaging systems. In this paper, we focus on designing a direct-vision cyclo-olefin-polymer double Amici prism for spectral imaging systems. We illustrate a designed structure, E48R/N-SF4/E48R, from which we obtain 13 deg of dispersion across the visible spectrum, equivalent to a 700 line-pairs/mm grating. We construct a simulated spectral imaging system with the designed direct-vision cyclo-olefin-polymer double Amici prism in optical design software and compare its imaging performance to that of a glass double Amici prism in the same system. The RMS spot-size results demonstrate that the plastic prism can serve as well as its glass competitors and offers better spectral resolution.
Near real-time stereo vision system
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)
1993-01-01
The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
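A rough sketch of the two core steps, band-pass pyramid filtering and correlation-based matching, is given below. It is a simplified stand-in for the described system: the window size and SSD cost are assumptions, and the Bayesian confidence estimation is omitted.

```python
import numpy as np
from scipy import ndimage

def laplacian_level(img, sigma=1.0):
    # Band-pass level: difference between the image and its Gaussian blur.
    return img - ndimage.gaussian_filter(img, sigma)

def disparity_ssd(left, right, max_d=16, win=5):
    """Per-pixel disparity by minimizing windowed SSD along scanlines
    (a basic area-correlation matcher on float images)."""
    h, w = left.shape
    best = np.zeros((h, w), dtype=np.int32)
    best_cost = np.full((h, w), np.inf)
    for d in range(max_d):
        shifted = np.roll(right, d, axis=1)          # candidate disparity d
        cost = ndimage.uniform_filter((left - shifted) ** 2, win)
        mask = cost < best_cost
        best[mask] = d
        best_cost[mask] = cost[mask]
    return best
```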
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-12-17
Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
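The fusion step described here follows the standard EKF pattern. Below is a generic predict/update cycle, sketched under the assumption that the DGPS/Vision output enters simply as one more measurement z; all names are illustrative, and the paper's actual state vector and models are more elaborate.

```python
import numpy as np

def ekf_step(x, P, f, F, Q, z, h, H, R):
    """One predict/update cycle of an Extended Kalman Filter.
    f/h: process and measurement models; F/H: their Jacobians at the
    current estimate; Q/R: process and measurement noise covariances."""
    x_pred = f(x)                          # predict state
    P_pred = F @ P @ F.T + Q               # predict covariance
    y = z - h(x_pred)                      # innovation (e.g., DGPS/Vision)
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: one-dimensional random walk observed directly.
f = h = lambda v: v
F = H = np.eye(1)
Q, R = 1e-3 * np.eye(1), 1e-2 * np.eye(1)
x, P = np.zeros(1), np.eye(1)
x, P = ekf_step(x, P, f, F, Q, np.array([0.5]), h, H, R)
```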
A Study of the Ethernet Throughput Performance of the Embedded System
NASA Astrophysics Data System (ADS)
Duan, Zhi-Yu; Zhao, Zhao-Wang
2007-09-01
An Ethernet acceleration solution developed for the NIOS II embedded system in astronomical applications, Mason Express, is introduced in this paper. By manually constructing the proper network protocol headers and directly driving the hardware, Mason Express bypasses the performance bottleneck of the lightweight IP stack (LWIP) and achieves up to a 90 Mb/s unidirectional data throughput rate from the embedded system board to the data-collecting computer. With the LWIP stack, the maximum data rate is about 10.57 Mb/s. Mason Express is a purely software solution: it requires no hardware changes, affects neither the uC/OS II operating system nor the LWIP stack, and can be implemented with or without an embedded operating system. It thus protects the users' existing software investment.
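The core trick, pre-building protocol headers instead of traversing a general-purpose stack, can be illustrated in a few lines. A hedged sketch in Python (the firmware itself would do this in C against the NIOS II MAC; the frame layout shown is just standard Ethernet II, not Mason Express internals):

```python
import struct

def ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int,
                   payload: bytes) -> bytes:
    """Prepend a 14-byte Ethernet II header to a payload. Pre-building
    headers like this lets firmware hand complete frames to the MAC
    without running a full IP stack in the data path."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + payload

# Broadcast frame carrying a 46-byte (minimum-size) payload.
frame = ethernet_frame(bytes.fromhex("ffffffffffff"),
                       bytes.fromhex("021122334455"),
                       0x0800,           # EtherType for IPv4
                       b"\x00" * 46)
print(len(frame))                        # 60 bytes on the wire (plus FCS)
```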
Embedded system of image storage based on fiber channel
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Su, Wanxin; Xing, Zhongbao; Wang, Hualong
2008-03-01
In domains such as aerospace, aviation, aiming, and optical measurement, an embedded system for imaging, processing, and recording is indispensable; it must have small volume, high processing speed, and high resolution. However, embedded storage technology develops slowly and has become the system bottleneck. RAID is commonly used to boost storage speed, but its large volume makes it unsuitable for embedded systems. Fiber Channel (FC) technology offers a new way to build a high-speed, portable storage system. To make the storage subsystem meet the demands of a high storage rate, we exploit a powerful Virtex-4 FPGA and high-speed Fiber Channel and propose a design for an embedded digital image storage system based on the Xilinx Fiber Channel Arbitrated Loop LogiCORE. The design uses Virtex-4 RocketIO MGT transceivers to transmit the data serially and can optionally connect many Fiber Channel hard drives in an Arbitrated Loop. It achieves a 400 MBps storage rate, breaks through the bottleneck of the PCI interface, and offers high speed, real-time operation, portability, and massive capacity.
[Design of an embedded stroke rehabilitation apparatus system based on Linux computer engineering].
Zhuang, Pengfei; Tian, XueLong; Zhu, Lin
2014-04-01
A realization project for an electrical stimulator aimed at the motor dysfunction caused by stroke is proposed in this paper. Based on neurophysiological biofeedback, the system, using an ARM9 S3C2440 as its core processor, integrates the collection and display of surface electromyography (sEMG) signals, as well as neuromuscular electrical stimulation (NMES), into one system. By embedding a Linux system, the project is able to use Qt/Embedded as a graphical interface design tool to accomplish the design of the stroke rehabilitation apparatus. Experiments showed that this system worked well.
Embedded expert system for space shuttle main engine maintenance
NASA Technical Reports Server (NTRS)
Pooley, J.; Thompson, W.; Homsley, T.; Teoh, W.; Jones, J.; Lewallen, P.
1987-01-01
The SPARTA Embedded Expert System (SEES) is an intelligent health monitoring system that directs analysis by placing confidence factors on possible engine status and then recommends a course of action to an engineer or engine controller. The technique can prevent catastrophic failures or costly rocket engine downtime caused by false alarms. Further, the SEES has potential as an on-board flight monitor for reusable rocket engine systems. The SEES methodology synergistically integrates vibration analysis, pattern recognition, and communications theory techniques with an artificial intelligence technique - the Embedded Expert System (EES).
Machine vision for digital microfluidics
NASA Astrophysics Data System (ADS)
Shin, Yong-Jun; Lee, Jeong-Bong
2010-01-01
Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.
Exact density functional and wave function embedding schemes based on orbital localization
NASA Astrophysics Data System (ADS)
Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály
2016-08-01
Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.
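For orientation, a Huzinaga-type projection can be written compactly in atomic-orbital matrix form. The following is a sketch under assumed standard notation and is not copied from the paper's working equations:

```latex
% Sketch of a Huzinaga-type embedded Fock equation in AO matrix form.
% F: Fock matrix, S: overlap matrix, D^B: idempotent density matrix of
% the environment orbitals; C^A: active-subsystem MO coefficients.
\begin{equation}
  \left( \mathbf{F} - \mathbf{F}\,\mathbf{D}^{B}\mathbf{S}
       - \mathbf{S}\,\mathbf{D}^{B}\mathbf{F} \right)\mathbf{C}^{A}
  = \mathbf{S}\,\mathbf{C}^{A}\,\boldsymbol{\varepsilon}^{A}
\end{equation}
% The projection terms drive the environment orbitals out of the active
% space, enforcing Pauli exclusion without an arbitrary level shift.
```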
A computer vision system for the recognition of trees in aerial photographs
NASA Technical Reports Server (NTRS)
Pinz, Axel J.
1991-01-01
Increasing problems of forest damage in Central Europe set the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented which is capable of finding trees in color infrared aerial photographs. Concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to a multiple interpretation result for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.
Eye vision system using programmable micro-optics and micro-electronics
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.; Amin, M. Junaid; Riza, Mehdi N.
2014-02-01
Proposed is a novel eye vision system that combines advanced micro-optic and micro-electronic technologies, including programmable micro-optic devices, pico-projectors, radio frequency (RF) and optical wireless communication and control links, energy harvesting and storage devices, and remote wireless energy transfer capabilities. This portable, lightweight system can measure eye refractive powers, optimize light conditions for the eye under test, conduct color-blindness tests, and implement eye strain relief and eye muscle exercises via time-sequenced imaging. Described are the basic design of the proposed system and its first-stage experimental results for spherical lens refractive error correction.
NASA Technical Reports Server (NTRS)
1995-01-01
NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.
NASA Astrophysics Data System (ADS)
Bales, R. C.; Bernacchi, L.; Conklin, M. H.; Viers, J. H.; Fogg, G. E.; Fisher, A. T.; Kiparsky, M.
2017-12-01
California's historic drought of 2011-2015 provided excellent conditions for researchers to listen to water-management challenges from decision makers, particularly with regard to data and information needs for improved decision making. Through the UC Water Security and Sustainability Research Initiative (http://ucwater.org/) we began a multi-year dialog with water-resources decision makers and state agencies that provide data and technical support for water management. Near-term products of that collaboration will be both a vision for a 21st-century water data and information system, and near-term steps to meet immediate legislative deadlines in a way that is consistent with the longer-term vision. While many university-based water researchers engage with state and local agencies on both science and policy challenges, UC Water's focus was on: i) integrated system management, from headwaters through groundwater and agriculture, and on ii) improved decision making through better water information systems. This focus aligned with the recognition by water leaders that fundamental changes in the way the state manages water were overdue. UC Water is focused on three "I"s: improved water information, empowering Institutions to use and to create new information, and enabling decision makers to make smart investments in both green and grey Infrastructure. Effective communication with water decision makers has led to engagement on high-priority programs where large knowledge gaps remain, including more-widespread groundwater recharge of storm flows, restoration of mountain forests in important source-water areas, governance structures for groundwater sustainability, and filling information gaps by bringing new technology to bear on measurement and data programs. Continuing engagement of UC Water researchers in public dialog around water resources, through opinion pieces, feature articles, blogs, white papers, social media, video clips and a feature documentary film have also been key to our continuing engagement. These novel partnerships are leading to decision-relevant tools and an improved integrated praxis in on-the-ground water-resources management. Our research is becoming more embedded in policies and our network remains interconnected with decision makers at multiple levels.
Oyedotun, Oyebade K; Khashman, Adnan
2017-02-01
Humans are adept at recognizing patterns and discovering even abstract features that are sometimes embedded therein. Our ability to use the banknotes in circulation for business transactions lies in the effortlessness with which we can recognize the different banknote denominations after seeing them over a period of time. More significant is that we can usually recognize these banknote denominations irrespective of which parts of the banknotes are exposed to us visually. Furthermore, our recognition ability is largely unaffected even when these banknotes are partially occluded. By a similar analogy, the robustness of intelligent systems performing the task of banknote recognition should not collapse under some minimum level of partial occlusion. Artificial neural networks are intelligent systems which from inception have taken many important cues related to structure and learning rules from the human nervous/cognition processing system. Likewise, it has been shown that advances in artificial neural network simulations can help us understand the human nervous/cognition system even further. In this paper, we investigate three hypothetical cognition frameworks for vision-based recognition of banknote denominations using competitive neural networks. In order to make the task more challenging and to stress-test the investigated hypotheses, we also consider the recognition of occluded banknotes. The implemented hypothetical systems are tasked to perform fast recognition of banknotes with up to 75% occlusion. The investigated hypothetical systems are trained on Nigeria's Naira banknotes, and several experiments are performed to demonstrate the findings presented within this work.
3D vision upgrade kit for the TALON robot system
NASA Astrophysics Data System (ADS)
Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-02-01
In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.
Art, Illusion and the Visual System.
ERIC Educational Resources Information Center
Livingstone, Margaret S.
1988-01-01
Describes the three part system of human vision. Explores the anatomical arrangement of the vision system from the eyes to the brain. Traces the path of various visual signals to their interpretations by the brain. Discusses human visual perception and its implications in art and design. (CW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-08-01
This article reports that there are literally hundreds of machine vision systems from which to choose. They range in cost from $10,000 to $1,000,000. Most have been designed for specific applications; the same systems, if used for a different application, may fail dismally. How can you avoid wasting money on inferior, useless, or nonexpandable systems? A good reference is the Automated Vision Association in Ann Arbor, Mich., a trade group comprised of North American machine vision manufacturers. Reputable suppliers caution users to do their homework before making an investment. Important considerations include comprehensive details on the objects to be viewed, that is, quantity, shape, dimension, size, and configuration details; lighting characteristics and variations; and component orientation details. Then, what do you expect the system to do: inspect, locate components, aid in robotic vision? Other criteria include system speed and related accuracy and reliability. What are the projected benefits and system paybacks? Examine primarily paybacks associated with scrap and rework reduction as well as reduced warranty costs.
Two challenges in embedded systems design: predictability and robustness.
Henzinger, Thomas A
2008-10-28
I discuss two main challenges in embedded systems design: the challenge to build predictable systems, and that to build robust systems. I suggest how predictability can be formalized as a form of determinism, and robustness as a form of continuity.
Delay differential analysis of time series.
Lainscsek, Claudia; Sejnowski, Terrence J
2015-03-01
Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. Together, this time-domain toolbox provides higher temporal resolution, increased frequency and phase coupling information, and it allows an easy and straightforward implementation of higher-order spectra across time compared with frequency-based methods such as the DFT and cross-spectral analysis.
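A minimal sketch of the two ingredients, a (possibly nonuniform) delay embedding and a least-squares DDE fit, is given below. The linear model form and names are illustrative assumptions; the paper's selection over pools of candidate nonlinear DDE structures is omitted.

```python
import numpy as np

def delay_embed(x, delays):
    """Build a delay embedding: row t holds [x(t - d) for d in delays].
    Nonuniform delays are allowed, e.g. delays=[1, 7]."""
    d_max = max(delays)
    return np.column_stack([x[d_max - d : len(x) - d] for d in delays])

def fit_dde(x, delays, dt=1.0):
    """Least-squares fit of a linear DDE model
    x'(t) ~ sum_i a_i * x(t - d_i), with x' from finite differences."""
    d_max = max(delays)
    dx = np.gradient(x, dt)[d_max:len(x)]
    X = delay_embed(x, delays)
    coeffs, *_ = np.linalg.lstsq(X, dx, rcond=None)
    return coeffs

# Example: embed a noisy oscillation with delays of 1 and 5 samples.
x = np.sin(0.2 * np.arange(2000)) + 0.01 * np.random.randn(2000)
print(fit_dde(x, delays=[1, 5]))
```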
Riva, Giuseppe; Graffigna, Guendalina; Baitieri, Maddalena; Amato, Alessandra; Bonanomi, Maria Grazia; Valentini, Paolo; Castelli, Guido
2014-01-01
The quest for an active and healthy ageing can be considered a "wicked problem." It is a social and cultural problem, which is difficult to solve because of incomplete, changing, and contradictory requirements. These problems are tough to manage because of their social complexity. They are a group of linked problems embedded in the structure of the communities in which they occur. First, they require the knowledge of the social and cultural context in which they occur. They can be solved only by understanding of what people do and why they do it. Second, they require a multidisciplinary approach. Wicked problems can have different solutions, so it is critical to capture the full range of possibilities and interpretations. Thus, we suggest that Università Cattolica del Sacro Cuore (UCSC) is well suited for accepting and managing this challenge because of its applied research orientation, multidisciplinary approach, and integrated vision. After presenting the research activity of UCSC, we describe a possible "systems thinking" strategy to consider the complexity and interdependence of active ageing and healthy living.
ERIC Educational Resources Information Center
Chen, Kan; Stafford, Frank P.
A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…
Research on an autonomous vision-guided helicopter
NASA Technical Reports Server (NTRS)
Amidi, Omead; Mesaki, Yuji; Kanade, Takeo
1994-01-01
Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.
1991-01-01
Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.
A robotic vision system to measure tree traits
USDA-ARS?s Scientific Manuscript database
The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...
Machine vision for real time orbital operations
NASA Technical Reports Server (NTRS)
Vinz, Frank L.
1988-01-01
Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputing visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).
Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan
2016-04-22
The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
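For context, the code below sketches the generic pattern such vision sensors rely on: an integer-pixel cross-correlation peak refined to subpixel precision. The three-point parabolic refinement shown is a common stand-in, not the paper's modified Taylor approximation or localization refinement algorithms.

```python
import numpy as np

def subpixel_shift_1d(ref, cur):
    """Estimate the shift of `cur` relative to `ref` from the
    cross-correlation peak, refined to subpixel precision by fitting
    a parabola through the peak and its two neighbours."""
    corr = np.correlate(cur - cur.mean(), ref - ref.mean(), mode="full")
    k = int(np.argmax(corr))
    shift = k - (len(ref) - 1)             # integer-pixel estimate
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2 * y1 + y2
        if denom != 0:                     # parabolic vertex offset
            shift += 0.5 * (y0 - y2) / denom
    return shift
```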
The Recovery of Optical Quality after Laser Vision Correction
Jung, Hyeong-Gi
2013-01-01
Purpose To evaluate the optical quality after laser in situ keratomileusis (LASIK) or serial photorefractive keratectomy (PRK) using a double-pass system and to follow the recovery of optical quality after laser vision correction. Methods This study measured the visual acuity, manifest refraction and optical quality before and one day, one week, one month, and three months after laser vision correction. Optical quality parameters including the modulation transfer function, Strehl ratio and intraocular scattering were evaluated with a double-pass system. Results This study included 51 eyes that underwent LASIK and 57 that underwent PRK. The optical quality three months post-surgery did not differ significantly between these laser vision correction techniques. Furthermore, the preoperative and postoperative optical quality did not differ significantly in either group. Optical quality recovered within one week after LASIK but took between one and three months to recover after PRK. The optical quality of patients in the PRK group seemed to recover slightly more slowly than their uncorrected distance visual acuity. Conclusions Optical quality recovers to the preoperative level after laser vision correction, so laser vision correction is efficacious for correcting myopia. The double-pass system is a useful tool for clinical assessment of optical quality. PMID:23908570
Understanding of and applications for robot vision guidance at KSC
NASA Technical Reports Server (NTRS)
Shawaga, Lawrence M.
1988-01-01
The primary thrust of robotics at KSC is for the servicing of Space Shuttle remote umbilical docking functions. In order for this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in Six Degrees of Freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes for the lab robot, guiding it through a closed loop visual feedback system to move with the simulated Orbiter interface. This paper will address an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications will be addressed.
A laser-based vision system for weld quality inspection.
Huang, Wei; Kovacevic, Radovan
2011-01-01
Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through the visual analysis of the acquired 3D profiles of the weld, the presences as well as the positions and sizes of the weld defects can be accurately identified and therefore, the non-destructive weld quality inspection can be achieved.
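The laser triangulation principle reduces to a similar-triangles relation between the spot's image offset and range. A sketch under an idealized geometry (laser axis parallel to the optical axis at a known baseline; the parameter values are illustrative, not the paper's sensor design):

```python
import math

def triangulation_range(pixel_offset, pixel_pitch, focal_len, baseline):
    """Range from a laser-triangulation sensor under a simplified
    geometry: z = f * b / x, where x is the laser spot's offset on
    the image sensor. All lengths in millimetres."""
    x = pixel_offset * pixel_pitch          # spot offset on the sensor
    if x == 0:
        return math.inf                     # spot at the principal point
    return focal_len * baseline / x

# e.g. 120-pixel offset, 5 um pitch, 8 mm lens, 30 mm baseline
print(triangulation_range(120, 0.005, 8.0, 30.0))  # -> 400.0 mm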
A Laser-Based Vision System for Weld Quality Inspection
Huang, Wei; Kovacevic, Radovan
2011-01-01
Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through the visual analysis of the acquired 3D profiles of the weld, the presences as well as the positions and sizes of the weld defects can be accurately identified and therefore, the non-destructive weld quality inspection can be achieved. PMID:22344308
NASA Astrophysics Data System (ADS)
Astafiev, A.; Orlov, A.; Privezencev, D.
2018-01-01
The article is devoted to the development of technology and software for the construction of positioning and control systems in industrial plants based on aggregation to determine the current storage area using computer vision and radiofrequency identification. It describes the developed of the project of hardware for industrial products positioning system in the territory of a plant on the basis of radio-frequency grid. It describes the development of the project of hardware for industrial products positioning system in the plant on the basis of computer vision methods. It describes the development of the method of aggregation to determine the current storage area using computer vision and radiofrequency identification. Experimental studies in laboratory and production conditions have been conducted and described in the article.
The embedded software life cycle - An expanded view
NASA Technical Reports Server (NTRS)
Larman, Brian T.; Loesh, Robert E.
1989-01-01
Six common issues that are encountered in the development of software for embedded computer systems are discussed from the perspective of their interrelationships with the development process and/or the system itself. Particular attention is given to concurrent hardware/software development, prototyping, the inaccessibility of the operational system, fault tolerance, the long life cycle, and inheritance. It is noted that the life cycle for embedded software must include elements beyond simply the specification and implementation of the target software.
Embedded optical interconnect technology in data storage systems
NASA Astrophysics Data System (ADS)
Pitwon, Richard C. A.; Hopkins, Ken; Milward, Dave; Muggeridge, Malcolm
2010-05-01
As both data storage interconnect speeds increase and form factors in hard disk drive technologies continue to shrink, the density of printed channels on the storage array midplane goes up. The dominant interconnect protocol on storage array midplanes is expected to increase to 12 Gb/s by 2012 thereby exacerbating the performance bottleneck in future digital data storage systems. The design challenges inherent to modern data storage systems are discussed and an embedded optical infrastructure proposed to mitigate this bottleneck. The proposed solution is based on the deployment of an electro-optical printed circuit board and active interconnect technology. The connection architecture adopted would allow for electronic line cards with active optical edge connectors to be plugged into and unplugged from a passive electro-optical midplane with embedded polymeric waveguides. A demonstration platform has been developed to assess the viability of embedded electro-optical midplane technology in dense data storage systems and successfully demonstrated at 10.3 Gb/s. Active connectors incorporate optical transceiver interfaces operating at 850 nm and are connected in an in-plane coupling configuration to the embedded waveguides in the midplane. In addition a novel method of passively aligning and assembling passive optical devices to embedded polymer waveguide arrays has also been demonstrated.
Micro- and nano-NDE systems for aircraft: great things in small packages
NASA Astrophysics Data System (ADS)
Malas, James C.; Kropas-Hughes, Claudia V.; Blackshire, James L.; Moran, Thomas; Peeler, Deborah; Frazier, W. G.; Parker, Danny
2003-07-01
Recent advancements in small, microscopic NDE sensor technologies will revolutionize how aircraft maintenance is done, and will significantly improve the reliability and airworthiness of current and future aircraft systems. A variety of micro/nano systems and concepts are being developed that will enable whole new capabilities for detecting and tracking structural integrity damage. For aging aircraft systems, the impact of micro-NDE sensor technologies will be felt immediately, with dramatic reductions in labor for maintenance, and extended useable life of critical components being two of the primary benefits. For the fleet management of future aircraft systems, a comprehensive evaluation and tracking of vehicle health throughout its entire life cycle will be needed. Indeed, micro/nano NDE systems will be instrumental in realizing this futuristic vision. Several major challenges will need to be addressed, however, before micro- and nano-NDE systems can effectively be implemented, and this will require interdisciplinary research approaches, and a systematic engineering integration of the new technologies into real systems. Future research will need to emphasize systems engineering approaches for designing materials and structures with in-situ inspection and prognostic capabilities. Recent advances in 1) embedded / add-on micro-sensors, 2) computer modeling of nondestructive evaluation responses, and 3) wireless communications are important steps toward this goal, and will ultimately provide previously unimagined opportunities for realizing whole new integrated vehicle health monitoring capabilities. The future use of micro/nano NDE technologies as vehicle health monitoring tools will have profound implications, and will provide a revolutionary way of doing NDE in the near and distant future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hégely, Bence; Nagy, Péter R.; Kállay, Mihály, E-mail: kallay@mail.bme.hu
Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.
GPU surface extraction using the closest point embedding
NASA Astrophysics Data System (ADS)
Kim, Mark; Hansen, Charles
2015-01-01
Isosurface extraction is a fundamental technique used for both surface reconstruction and mesh generation. One method to extract well-formed isosurfaces is a particle system; unfortunately, particle systems can be slow. In this paper, we introduce an enhanced parallel particle system that uses the closest point embedding as the surface representation to speed up the particle system for isosurface extraction. The closest point embedding is used in the Closest Point Method (CPM), a technique that uses a standard three-dimensional numerical PDE solver on two-dimensional embedded surfaces. To fully take advantage of the closest point embedding, it is coupled with a Barnes-Hut tree code on the GPU. This new technique produces well-formed, conformal unstructured triangular and tetrahedral meshes from labeled multi-material volume datasets. Further, this new parallel implementation of the particle system is faster than any known method for conformal multi-material mesh extraction. The resulting speed-ups can reduce the time from labeled data to mesh from hours to minutes, benefiting users, such as bioengineers, who employ triangular and tetrahedral meshes.
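The basic operation of the closest point embedding is a projection of points in the embedding space back onto the surface. A sketch for the analytically simple case of a sphere (the actual method stores a discretized closest point field for arbitrary labeled surfaces, and the Barnes-Hut particle repulsion is omitted):

```python
import numpy as np

def closest_point_sphere(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Closest point on a sphere: snap each 3D point back to the
    surface along the radial direction. Particles move in the
    embedding space and are repeatedly re-projected like this."""
    c = np.asarray(center)
    v = np.asarray(p, dtype=float) - c
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    return c + radius * v / np.where(n == 0, 1.0, n)

# Move particles with a stand-in repulsion step, then re-project.
pts = closest_point_sphere(np.random.randn(100, 3))
step = 0.01 * np.random.randn(100, 3)     # placeholder for repulsion forces
pts = closest_point_sphere(pts + step)
```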
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III
2007-01-01
NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-based piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions were neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, unto itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.
Appendix B: Rapid development approaches for system engineering and design
NASA Technical Reports Server (NTRS)
1993-01-01
Conventional processes often produce systems which are obsolete before they are fielded. This paper explores some of the reasons for this, and provides a vision of how we can do better. This vision is based on our explorations in improved processes and system/software engineering tools.
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.
2014-01-01
Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and to enable the implementation of operational improvements for low-visibility surface, arrival, and departure operations in the terminal environment with efficiency equivalent to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low-visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-01-01
Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
Survey of computer vision-based natural disaster warning systems
NASA Astrophysics Data System (ADS)
Ko, ByoungChul; Kwak, Sooyeong
2012-07-01
With the rapid development of information technology, natural disaster prevention is growing as a new research field dealing with surveillance systems. To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing, geographic information systems (GIS), and vision sensors has been receiving widespread interest over the last decade. This paper provides an up-to-date review of five different types of natural disasters and their corresponding warning systems using computer vision and pattern recognition techniques, such as wildfire smoke and flame detection, water level detection for flood prevention, coastal zone monitoring, and landslide detection. Finally, we conclude with some thoughts about future research directions.
Vision-guided micromanipulation system for biomedical application
NASA Astrophysics Data System (ADS)
Shim, Jae-Hong; Cho, Sung-Yong; Cha, Dong-Hyuk
2004-10-01
In recent years, various studies on the biomedical applications of robots have been carried out. In particular, robotic manipulation of biological cells has been studied by many researchers. Most biological cells are spherical in shape. Commercial biological manipulation systems have utilized only the 2-dimensional images available through optical microscopes, and manipulation of biological cells depends mainly on the subjective viewpoint of the operator. For these reasons, many problems arise, such as slippage, rupture of the cell membrane, and damage to the pipette tip. To overcome these problems, we have proposed a vision-guided biological cell manipulation system. The newly proposed manipulation system makes use of vision and graphics techniques. Through the proposed procedures, an operator can inject a biological cell scientifically and objectively. The proposed manipulation system can also measure the contact force that occurs during injection of a biological cell, and the measured force can be transmitted to the operator by the proposed haptic device. Consequently, the proposed manipulation system can safely handle biological cells without any damage. This paper presents our vision-guided manipulation techniques and the concept of contact force sensing. A series of experiments shows that the proposed vision-guided manipulation system is a promising tool for precision manipulation of biological material such as DNA.
Spacesuit Data Display and Management System
NASA Technical Reports Server (NTRS)
Hall, David G.; Sells, Aaron; Shah, Hemal
2009-01-01
A prototype embedded avionics system has been designed for the next generation of NASA extra-vehicular-activity (EVA) spacesuits. The system performs biomedical and other sensor monitoring, image capture, data display, and data transmission. An existing NASA Phase I and II award winning design for an embedded computing system (ZIN vMetrics - BioWATCH) has been modified. The unit has a reliable, compact form factor with flexible packaging options. These innovations are significant, because current state-of-the-art EVA spacesuits do not provide capability for data displays or embedded data acquisition and management. The Phase 1 effort achieved Technology Readiness Level 4 (high fidelity breadboard demonstration). The breadboard uses a commercial-grade field-programmable gate array (FPGA) with embedded processor core that can be upgraded to a space-rated device for future revisions.
Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja
2016-03-01
Joint fracture surgery quality can be improved by robotic system with high-accuracy and high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy was accomplished by merging fast open-loop control with vision-based control. This two-phase process is designed to eliminate the open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. Evaluation of the control system accuracy was performed using robot positioning trials, and fracture reduction accuracy was tested in trials on ex vivo porcine model. The system resulted in high fracture reduction reliability with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors in the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, contributing a potential improvement of their quality.
Nested Interrupt Analysis of Low Cost and High Performance Embedded Systems Using GSPN Framework
NASA Astrophysics Data System (ADS)
Lin, Cheng-Min
Interrupt service routines are a key technology for embedded systems. In this paper, we introduce the standard approach of using Generalized Stochastic Petri Nets (GSPNs) as a high-level model for generating continuous-time Markov chains (CTMCs), and then use Markov Reward Models (MRMs) to compute the performance of embedded systems. This framework is employed to analyze two low-cost, high-performance embedded controllers, ARM7 and Cortex-M3. Cortex-M3 is designed with a tail-chaining mechanism to improve on the performance of ARM7 when a nested interrupt occurs on an embedded controller. The Platform Independent Petri net Editor 2 (PIPE2) tool is used to model and evaluate the controllers in terms of power consumption and interrupt overhead performance. The numerical results show that, in terms of both power consumption and interrupt overhead, Cortex-M3 performs better than ARM7.
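The final step of such a framework, evaluating a reward over the CTMC generated from the GSPN, is easy to sketch. Below is a toy two-state interrupt model with assumed rates, not the paper's ARM7/Cortex-M3 models:

```python
import numpy as np

def ctmc_steady_state(Q):
    """Steady-state distribution pi of a CTMC with generator matrix Q:
    solve pi Q = 0 subject to sum(pi) = 1 (least-squares form)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 2-state model: 'running' <-> 'in ISR', rates in 1/s (assumed).
Q = np.array([[ -50.0,    50.0],     # interrupt arrival rate
              [2000.0, -2000.0]])    # ISR completion rate
pi = ctmc_steady_state(Q)
rewards = np.array([0.0, 1.0])       # reward 1 while servicing interrupts
print(pi @ rewards)                  # expected fraction of time in ISRs
```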
Obstacles encountered in the development of the low vision enhancement system.
Massof, R W; Rickman, D L
1992-01-01
The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.
NASA Astrophysics Data System (ADS)
Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan
2010-02-01
The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands, issued from the PC end.
Night vision: requirements and possible roadmap for FIR and NIR systems
NASA Astrophysics Data System (ADS)
Källhammer, Jan-Erik
2006-04-01
A night vision system must increase visibility in situations where only low-beam headlights can be used today. As pedestrians and animals face the highest increase in risk in night-time traffic due to darkness, the ability to detect those objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far infrared systems have been shown to be superior to near infrared systems in terms of pedestrian detection distance. Near infrared images were rated as having significantly higher visual clutter than far infrared images. Visual clutter has been shown to correlate with a reduction in the detection distance of pedestrians. Far infrared images are perceived as more unusual and therefore more difficult to interpret, although this image appearance is likely related to the lower visual clutter. However, the main issue in comparing the two technologies should be how well they solve the driver's problem of insufficient visibility under low-beam conditions, especially regarding pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a main issue will be whether the advantage of FIR systems will vanish given NIR systems with well-performing automatic pedestrian detection. The first night vision introductions did not generate the sales volumes initially expected. A renewed interest in night vision systems is, however, to be expected after the release of night vision systems by BMW, Mercedes, and Honda, the latter with automatic pedestrian detection.
Accuracy improvement in a calibration test bench for accelerometers by a vision system
DOE Office of Scientific and Technical Information (OSTI.GOV)
D’Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it
2016-06-28
A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior when the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.
NASA Astrophysics Data System (ADS)
Di, Si; Lin, Hui; Du, Ruxu
2011-05-01
Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of the two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at object distances ranging from 10 cm to 35 cm. Because of its low cost, small size, and simple setup, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
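The displacement estimate itself can be sketched with plain block matching between two frames, in the same calibration-free spirit as the BVS (the patch size, search range, and names below are illustrative assumptions, not the paper's algorithm):

```python
# Sketch of pixel-level 2D displacement estimation by block matching:
# slide a central patch of the reference frame over a small search window
# in the current frame and keep the best-matching offset.
import numpy as np

def estimate_shift(ref: np.ndarray, cur: np.ndarray, patch: int = 32, search: int = 8):
    """Return the integer (dy, dx) that best aligns a central patch of ref inside cur."""
    cy, cx = ref.shape[0] // 2, ref.shape[1] // 2
    template = ref[cy - patch // 2: cy + patch // 2,
                   cx - patch // 2: cx + patch // 2].astype(np.float64)
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            window = cur[cy - patch // 2 + dy: cy + patch // 2 + dy,
                         cx - patch // 2 + dx: cx + patch // 2 + dx].astype(np.float64)
            err = np.sum((window - template) ** 2)  # sum of squared differences
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```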
Constraint Embedding for Multibody System Dynamics
NASA Technical Reports Server (NTRS)
Jain, Abhinandan
2009-01-01
This paper describes a constraint embedding approach for the handling of local closure constraints in multibody system dynamics. The approach uses spatial operator techniques to eliminate local-loop constraints from the system and effectively convert it into a tree-topology system. This approach allows the direct derivation of recursive O(N) techniques for solving the system dynamics, avoiding the expensive steps that would otherwise be required for handling the closed-chain dynamics. The approach is very effective for systems where the constraints are confined to small subgraphs within the system topology. The paper provides background on the spatial operator O(N) algorithms and the extensions for handling embedded constraints, and concludes with some examples of such constraints.
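For readers outside the spatial-operator literature, the underlying idea can be stated in generic textbook notation (a null-space elimination, not the paper's O(N) recursive derivation): with tree coordinates q, loop-closure constraint G(q)q̇ = 0, and a null-space basis N(q) with GN = 0, the Lagrange multipliers drop out and a reduced, tree-like system remains.

```latex
% Generic loop-constraint elimination (textbook null-space form, not the
% paper's spatial-operator formalism).
\begin{align}
  M(q)\,\ddot{q} + c(q,\dot{q}) &= \tau + G^{\mathsf{T}}(q)\,\lambda,
  \qquad G(q)\,\dot{q} = 0,\\
  \dot{q} = N(q)\,\nu,\;\; G N = 0
  \;\Rightarrow\;
  \bigl(N^{\mathsf{T}} M N\bigr)\,\dot{\nu}
  &= N^{\mathsf{T}}\bigl(\tau - c - M\,\dot{N}\,\nu\bigr).
\end{align}
```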
Automated Grading of Rough Hardwood Lumber
Richard W. Conners; Tai-Hoon Cho; Philip A. Araman
1989-01-01
Any automatic hardwood grading system must have two components. The first of these is a computer vision system for locating and identifying defects on rough lumber. The second is a system for automatically grading boards based on the output of the computer vision system. This paper presents research results aimed at developing the first of these components. The...
A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection
D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin
1993-01-01
A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...
Implementing the President's Vision: JPL and NASA's Exploration Systems Mission Directorate
NASA Technical Reports Server (NTRS)
Sander, Michael J.
2006-01-01
As part of the NASA team, the Jet Propulsion Laboratory is involved in the Exploration Systems Mission Directorate (ESMD) work to implement the President's Vision for Space Exploration. In this slide presentation, the roles assigned to the various NASA centers to implement the vision are reviewed. The plan for JPL is to use the Constellation program to advance science and Constellation program objectives in combination. JPL's current participation includes systems engineering support; Command, Control, Computing and Information (C3I) architecture; Crew Exploration Vehicle (CEV) Thermal Protection System (TPS) project support and CEV landing-assist support; ground support systems support at JSC and KSC; the Exploration Communication and Navigation System (ECANS); and flight prototypes for cabin atmosphere instruments.
Jang, Yongwon; Noh, Hyung Wook; Lee, I B; Jung, Ji-Wook; Song, Yoonseon; Lee, Sooyeul; Kim, Seunghwan
2012-01-01
A patch-type embedded cardiac function monitoring system was developed to detect arrhythmias such as PVC (premature ventricular contraction), pause, ventricular fibrillation, and tachy/bradycardia. The overall system is composed of a main module, including a dual processor, and a Bluetooth telecommunication module. The dual-microprocessor strategy minimizes power consumption and size, and guarantees sufficient resources for the embedded software. The developed software was verified against standard databases and showed good performance.
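The rate-based part of such arrhythmia screening can be sketched from R-peak timestamps alone; the thresholds below are common textbook values, not the paper's verified parameters:

```python
# Sketch of rate-based rhythm checks from R-peak times (seconds);
# thresholds are illustrative, and sustained-episode logic is omitted.
import numpy as np

def rhythm_flags(r_peaks_s: np.ndarray) -> dict:
    """Flag pauses and tachy/bradycardia from R-peak timestamps."""
    rr = np.diff(r_peaks_s)        # RR intervals in seconds
    hr = 60.0 / rr                 # instantaneous heart rate, bpm
    return {
        "pause": bool(np.any(rr > 2.0)),          # > 2 s without a beat
        "tachycardia": bool(np.any(hr > 100.0)),
        "bradycardia": bool(np.any(hr < 50.0)),
    }

# The 2.6 s gap flags a pause (and, as a 23 bpm interval, bradycardia).
print(rhythm_flags(np.array([0.0, 0.8, 1.6, 2.4, 5.0])))
```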
Applied research of embedded WiFi technology in the motion capture system
NASA Astrophysics Data System (ADS)
Gui, Haixia
2012-04-01
Embedded WiFi is currently one of the hot spots in wireless network applications. This paper first introduces the definition and characteristics of WiFi. Given WiFi's advantages, such as wire-free installation, simple operation, and stable transmission, the paper then presents a system design applying embedded wireless WiFi technology in a motion capture system, and verifies the effectiveness of the design with WiFi-based wireless sensor hardware and software.
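As a rough illustration of the data path such a design implies, a capture node can push samples to a WiFi-connected server over UDP; the address, port, and packet layout below are assumptions for the sketch, not the paper's design:

```python
# Sketch: streaming motion-capture accelerometer samples over WiFi via UDP.
# Host address, port, and the (<Idfff>) packet layout are illustrative.
import socket
import struct
import time

HOST, PORT = "192.168.1.100", 5005  # capture-server address (assumed)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_sample(node_id: int, ax: float, ay: float, az: float) -> None:
    """Pack one sample as (node id, timestamp, ax, ay, az) and send it."""
    packet = struct.pack("<Idfff", node_id, time.time(), ax, ay, az)
    sock.sendto(packet, (HOST, PORT))

send_sample(1, 0.02, -0.01, 9.81)  # one stationary reading
```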
Fusion of multichannel local and global structural cues for photo aesthetics evaluation.
Luming Zhang; Yue Gao; Zimmermann, Roger; Qi Tian; Xuelong Li
2014-03-01
Photo aesthetic quality evaluation is a fundamental yet under-addressed task in the computer vision and image processing fields. Conventional approaches suffer from two drawbacks. First, both the local and the global spatial arrangement of image regions play an important role in photo aesthetics; however, existing rules, e.g., visual balance, only heuristically define which spatial distribution of the salient regions of a photo is aesthetically pleasing. Second, it is difficult to adjust visual cues from multiple channels automatically in photo aesthetics assessment. To solve these problems, we propose a new photo aesthetics evaluation framework, focusing on learning image descriptors that characterize local and global structural aesthetics from multiple visual channels. In particular, to describe the spatial structure of local image regions, we construct graphlets, i.e., small connected graphs, by connecting spatially adjacent atomic regions. Since spatially adjacent graphlets lie close together in their feature space, we project them onto a manifold and subsequently propose an embedding algorithm. The embedding algorithm encodes the photo's global spatial layout into the graphlets. Simultaneously, the importance of graphlets from multiple visual channels is dynamically adjusted. Finally, these post-embedding graphlets are integrated for photo aesthetics evaluation using a probabilistic model. Experimental results show that: 1) the visualized graphlets explicitly capture the aesthetically arranged atomic regions; 2) the proposed approach generalizes and improves four prominent aesthetic rules; and 3) our approach significantly outperforms state-of-the-art algorithms in photo aesthetics prediction.
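The graphlet construction step can be sketched directly: enumerate small connected subgraphs of the region adjacency graph (the adjacency structure and size bound below are illustrative; the manifold embedding step is not reproduced):

```python
# Sketch of graphlet (small connected subgraph) enumeration over a region
# adjacency graph; the adjacency dict maps each region to the set of
# regions it touches.  Illustrative only.
from itertools import combinations

def _connected(subset, adjacency):
    """Check that the induced subgraph on `subset` is connected (DFS)."""
    subset = set(subset)
    stack, seen = [next(iter(subset))], set()
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adjacency[v] & subset - seen)
    return seen == subset

def graphlets(adjacency, max_size=3):
    """Yield every connected region subset of size 2..max_size."""
    for k in range(2, max_size + 1):
        for subset in combinations(sorted(adjacency), k):
            if _connected(subset, adjacency):
                yield subset

# Toy region adjacency graph: region 2 touches everything, 3 only touches 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(list(graphlets(adj)))
```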
Night vision imaging systems design, integration, and verification in military fighter aircraft
NASA Astrophysics Data System (ADS)
Sabatini, Roberto; Richardson, Mark A.; Cantiello, Maurizio; Toscano, Mario; Fiorini, Pietro; Jia, Huamin; Zammit-Mangion, David
2012-04-01
This paper describes the development and testing activities conducted by the Italian Air Force Official Test Centre (RSV) in collaboration with Alenia Aerospace, Litton Precision Products and Cranfield University in order to confer Night Vision Imaging Systems (NVIS) capability on the Italian TORNADO IDS (Interdiction and Strike) and ECR (Electronic Combat and Reconnaissance) aircraft. The activities consisted of various Design, Development, Test and Evaluation (DDT&E) tasks, including Night Vision Goggles (NVG) integration, cockpit instrument and external lighting modifications, as well as various ground test sessions and a total of eighteen flight test sorties. RSV and Litton Precision Products were responsible for coordinating and conducting the installation activities for the internal and external lights. In particular, an iterative process was established, allowing on-site rapid correction of the major deficiencies encountered during the ground and flight test sessions. Both single-ship (day/night) and formation (night) flights were performed, shared between the test crews involved in the activities, allowing for a redundant examination of the various test items by all participants. An innovative test matrix was developed and implemented by RSV for assessing the operational suitability and effectiveness of the various modifications. Also important was the definition of test criteria for Pilot and Weapon Systems Officer (WSO) workload assessment during the accomplishment of various operational tasks in NVG missions. Furthermore, the specific technical and operational elements required for evaluating the modified helmets were identified, allowing an exhaustive comparative evaluation of the two proposed solutions (i.e., the HGU-55P and HGU-55G modified helmets). The results of the activities were very satisfactory. The initial compatibility problems encountered were progressively mitigated by incorporating modifications in both the front and rear cockpits at the various stages of the test campaign. This process allowed a considerable enhancement of the TORNADO NVIS configuration, giving the aircraft a good medium-high level NVG operational capability. Further developments include the design, integration and testing of internal/external lighting for the Italian TORNADO "Mid Life Update" (MLU) and other programs, such as the AM-X aircraft internal/external lights modification/testing and activities addressing low-altitude NVG operations with fast jets (e.g., TORNADO, AM-X, MB-339CD), a major issue being the safe ejection of aircrew with NVG and NVG-modified helmets. Two options have been identified for solving this problem: modification of the current Gentex HGU-55 helmets, or the design of a new helmet incorporating a reliable NVG connection/disconnection device (i.e., a mechanical system fully integrated in the helmet frame) with embedded automatic disconnection capability in case of ejection.
Embedded Hyperchaotic Generators: A Comparative Analysis
NASA Astrophysics Data System (ADS)
Sadoudi, Said; Tanougast, Camel; Azzaz, Mohamad Salah; Dandache, Abbas
In this paper, we present a comparative analysis of FPGA implementation performance, in terms of throughput and resource cost, of five well-known autonomous continuous hyperchaotic systems. The goal of this analysis is to identify the embedded hyperchaotic generator that leads to designs with small logic area cost, satisfactory throughput rates, low power consumption, and the low latency required for embedded applications such as secure digital communications between embedded systems. To implement the four-dimensional (4D) chaotic systems, we use a new structural hardware architecture based on a direct VHDL description of the fourth-order Runge-Kutta method (RK-4). The comparative analysis shows that the hyperchaotic Lorenz generator provides attractive performance compared with the others. In fact, its hardware implementation requires only 2067 CLB slices, 36 multipliers and no block RAMs, and achieves a throughput rate of 101.6 Mbps at the output of the FPGA circuit, at a clock frequency of 25.315 MHz, with a low latency of 316 ns. Consequently, these implementation results make the embedded hyperchaotic Lorenz generator the best candidate for embedded communications applications.
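A software model of the RK-4 core is easy to state; the VHDL architecture maps the same four-stage update into hardware. The sketch below integrates one 4D hyperchaotic Lorenz-type system reported in the literature, with illustrative parameters that may differ from the generators compared in the paper:

```python
# Software sketch of RK-4 integration of a 4D hyperchaotic Lorenz-type
# system (equations and parameters are one common literature form,
# used here for illustration only).
import numpy as np

A, B, C, R = 10.0, 8.0 / 3.0, 28.0, -1.0

def f(s: np.ndarray) -> np.ndarray:
    x, y, z, w = s
    return np.array([A * (y - x) + w,
                     C * x - y - x * z,
                     x * y - B * z,
                     -y * z + R * w])

def rk4_step(s: np.ndarray, h: float) -> np.ndarray:
    """One fourth-order Runge-Kutta step of size h."""
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0, 1.0])
for _ in range(10000):          # one state sample per step
    state = rk4_step(state, 0.001)
print(state)
```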
A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.
Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G
2015-02-01
Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO2 laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with significant potential to improve the safety and quality of laser microsurgeries.
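The dual control idea can be caricatured in a few lines: command the planned open-loop scan point, observe where the spot actually lands, and shift the next command by a fraction of the error. The gain and the observation stub below are illustrative placeholders, not the paper's controller:

```python
# Sketch of vision-based correction of an open-loop scan path; `observe`
# stands in for the vision measurement and `gain` is illustrative.
import numpy as np

def corrected_path(planned: np.ndarray, observe, gain: float = 0.8) -> np.ndarray:
    """Shift each planned target by a fraction of the observed aiming error."""
    out = []
    for target in planned:
        actual = observe(target)                 # where the spot really landed
        out.append(target + gain * (target - actual))
    return np.array(out)

# Toy run: the observation stub injects a constant 20 um calibration bias,
# which the correction largely removes from the commanded path.
bias = np.array([20e-6, 0.0])
path = np.stack([np.linspace(0, 1e-3, 5), np.zeros(5)], axis=1)
print(corrected_path(path, lambda t: t + bias))
```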
The Tactile Vision Substitution System: Applications in Education and Employment
ERIC Educational Resources Information Center
Scadden, Lawrence A.
1974-01-01
The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)
NASA Technical Reports Server (NTRS)
Kramer, Lynda J. (Compiler)
1999-01-01
The second NASA-sponsored Workshop on Synthetic/Enhanced Vision (S/EV) Display Systems was conducted January 27-29, 1998 at the NASA Langley Research Center. The purpose of this workshop was to provide a forum for interested parties to discuss topics in the Synthetic Vision (SV) element of the NASA Aviation Safety Program and to encourage those interested parties to participate in the development, prototyping, and implementation of S/EV systems that enhance aviation safety. The SV element addresses the potential safety benefits of synthetic/enhanced vision display systems for low-end general aviation aircraft, high-end general aviation aircraft (business jets), and commercial transports. About 112 people attended the workshop, including representatives from industry, the FAA, and other government organizations (NOAA, NIMA, etc.). The workshop provided opportunities for interested individuals to give presentations on the state of the art in potentially applicable systems, as well as to discuss areas of research that might be considered for inclusion within the Synthetic Vision element program to contribute to the reduction of the fatal aircraft accident rate. Panel discussions on topical areas such as databases, displays, certification issues, and sensors were conducted, with time allowed for audience participation.
An assembly system based on industrial robot with binocular stereo vision
NASA Astrophysics Data System (ADS)
Tang, Hong; Xiao, Nanfeng
2017-01-01
This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. First, binocular stereo vision with a visual-attention-mechanism model is used to quickly locate the image regions that contain the electronic parts and components. Second, a deep neural network is adopted to recognize the features of the electronic parts and components. Third, in order to control the end-effector of the industrial robot to grasp the parts, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
NASA Astrophysics Data System (ADS)
Tramutola, A.; Paltro, D.; Cabalo Perucha, M. P.; Paar, G.; Steiner, J.; Barrio, A. M.
2015-09-01
Vision Based Navigation (VBNAV) has been identified as a valid technology to support space exploration because it can improve the autonomy and safety of space missions. Several mission scenarios can benefit from VBNAV: Rendezvous & Docking, Fly-Bys, Interplanetary Cruise, Entry Descent and Landing (EDL) and Planetary Surface exploration. For some of them, VBNAV can improve the accuracy of state estimation, acting as an additional relative navigation sensor or as an absolute navigation sensor. For others, such as surface mobility and terrain exploration for path identification and planning, VBNAV is mandatory. This paper presents the general avionic architecture of a Vision Based System as defined in the frame of the ESA R&T study “Multi-purpose Vision-based Navigation System Engineering Model - part 1 (VisNav-EM-1)”, with special focus on the surface mobility application.
Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Young, Steven D.
2005-01-01
In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
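The shadow-domain transformation can be sketched as a ray march over the elevation grid: a cell is shadowed if terrain rises above its line of sight toward the illumination source. Grid spacing, step direction, and names below are assumptions for the sketch, not the SHADE algorithm itself:

```python
# Sketch of shadow computation over a digital elevation model (DEM):
# march from each cell toward the source and test for occlusion.
import numpy as np

def shadow_mask(dem: np.ndarray, sun_dx: int, sun_dy: int, slope: float) -> np.ndarray:
    """slope = line-of-sight rise per cell toward the source; integer step direction."""
    h, w = dem.shape
    shadowed = np.zeros_like(dem, dtype=bool)
    for y in range(h):
        for x in range(w):
            ray_h, yy, xx = dem[y, x], y + sun_dy, x + sun_dx
            while 0 <= yy < h and 0 <= xx < w:
                ray_h += slope
                if dem[yy, xx] > ray_h:    # terrain blocks the source
                    shadowed[y, x] = True
                    break
                yy += sun_dy
                xx += sun_dx
    return shadowed
```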
Institutional Vision at Proprietary Schools: Advising for Profit
ERIC Educational Resources Information Center
Abelman, Robert; Dalessandro, Amy; Janstova, Patricie; Snyder-Suhy, Sharon
2007-01-01
A college or university's general approach to students and student support services, as reflected in its institutional vision, can serve to advocate the adoption of one type of advising structure, approach, and delivery system over another. A content analysis of a nationwide sample of institutional vision statements from NACADA-membership colleges…
NASA Astrophysics Data System (ADS)
Desai, Alok; Lee, Dah-Jye
2013-12-01
There has been significant research on the development of feature descriptors in the past few years, but most of it does not emphasize real-time applications. This paper presents the development of an affine-invariant feature descriptor for low-resource applications such as UAVs and UGVs equipped with embedded systems built around a small microprocessor, a field programmable gate array (FPGA), or a smart phone device. UAVs and UGVs have proven suitable for many promising applications such as unknown environment exploration and search and rescue operations. These applications require on-board image processing for obstacle detection, avoidance, and navigation. All of these real-time vision applications require a camera to grab images and a feature descriptor to match features. A good feature descriptor uniquely describes a feature point, allowing it to be correctly identified and matched with its corresponding feature point in another image. Few feature description algorithms are available for resource-limited systems; they either demand too much of the device's resources or simplify the algorithm so far that performance suffers. This research aims to meet the needs of these systems without sacrificing accuracy. This paper introduces a new feature descriptor called PRObabilistic model (PRO) for UGV navigation applications. It is a compact and efficient binary descriptor that is hardware-friendly and easy to implement.
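Whatever the descriptor's construction, the matching stage for a binary descriptor reduces to nearest-neighbour search under Hamming distance, which is what makes such descriptors hardware-friendly. A minimal sketch, assuming 256-bit descriptors packed into uint8 arrays (the data are synthetic):

```python
# Sketch of binary descriptor matching by Hamming distance; descriptor
# length and database contents are illustrative.
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two uint8-packed descriptors."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match(query: np.ndarray, database: np.ndarray) -> int:
    """Index of the database descriptor closest to the query."""
    return int(np.argmin([hamming(query, d) for d in database]))

rng = np.random.default_rng(0)
db = rng.integers(0, 256, size=(100, 32), dtype=np.uint8)  # 100 x 256-bit
q = db[7].copy()
q[0] ^= 0b1                                                # flip one bit
print(match(q, db))                                        # -> 7 with this seed
```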
Singh, Anushikha; Dutta, Malay Kishore
2017-12-01
The authentication and integrity verification of medical images is a critical and growing issue for patients in e-health services. Accurate identification of medical images and patient verification is an essential requirement to prevent error in medical diagnosis. The proposed work presents an imperceptible watermarking system to address the security of medical fundus images for tele-ophthalmology applications and computer-aided automated diagnosis of retinal diseases. In the proposed work, patient identity is embedded in the fundus image in the singular value decomposition domain, with an adaptive quantization parameter to maintain perceptual transparency for a variety of fundus images, whether healthy or disease-affected. Insertion of the watermark does not affect the automatic image processing diagnosis of retinal objects and pathologies, which ensures uncompromised computer-based diagnosis associated with the fundus image. The patient ID is correctly recovered from the watermarked fundus image for integrity verification at the diagnosis centre. The proposed watermarking system was tested on a comprehensive database of fundus images, and the results are convincing: the method is imperceptible and does not affect computer-vision-based automated diagnosis of retinal diseases. Correct recovery of the patient ID from the watermarked fundus image makes the proposed system applicable to the authentication of fundus images for computer-aided diagnosis and tele-ophthalmology applications. Copyright © 2017 Elsevier B.V. All rights reserved.
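One common way to realize SVD-domain embedding with a quantization parameter is quantization index modulation on the leading singular value of each image block; the sketch below illustrates that generic scheme with a fixed step, not the paper's adaptive parameter selection:

```python
# Sketch of generic SVD-domain embedding: the parity of the quantization
# cell of the leading singular value encodes one bit per block.
# The step q is fixed here; the paper uses an adaptive parameter.
import numpy as np

def embed_bit(block: np.ndarray, bit: int, q: float = 24.0) -> np.ndarray:
    """Re-quantize the leading singular value so its cell parity encodes bit."""
    u, s, vt = np.linalg.svd(block.astype(np.float64), full_matrices=False)
    cell = np.floor(s[0] / q)
    if int(cell) % 2 != bit:
        cell += 1
    s[0] = (cell + 0.5) * q          # centre of a cell with the right parity
    return u @ np.diag(s) @ vt

def extract_bit(block: np.ndarray, q: float = 24.0) -> int:
    s = np.linalg.svd(block.astype(np.float64), compute_uv=False)
    return int(np.floor(s[0] / q)) % 2

blk = np.random.default_rng(1).uniform(0, 255, (8, 8))
print(extract_bit(embed_bit(blk, 1)), extract_bit(embed_bit(blk, 0)))  # -> 1 0
```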
Sabel, Bernhard A; Wang, Jiaqi; Cárdenas-Morales, Lizbeth; Faiq, Muneeb; Heim, Christine
2018-06-01
The loss of vision after damage to the retina, optic nerve, or brain often has grave consequences in everyday life, such as problems with recognizing faces, reading, or mobility. Because vision loss is considered irreversible and often progressive, patients experience continuous mental stress due to worries, anxiety, or fear, with secondary consequences such as depression and social isolation. While prolonged mental stress is clearly a consequence of vision loss, it may also aggravate the situation. In fact, continuous stress and elevated cortisol levels negatively impact the eye and brain due to autonomic nervous system (sympathetic) imbalance and vascular dysregulation; hence stress may also be one of the major causes of visual system diseases such as glaucoma and optic neuropathy. Although stress is a known risk factor, its causal role in the development or progression of certain visual system disorders is not widely appreciated. This review of the literature discusses the relationship between stress and ophthalmological diseases. We conclude that stress is both a consequence and a cause of vision loss. This creates a vicious cycle, a downward spiral in which initial vision loss creates stress, which further accelerates vision loss, creating even more stress, and so forth. This new psychosomatic perspective has several implications for clinical practice. Firstly, stress reduction and relaxation techniques (e.g., meditation, autogenic training, stress management training, and psychotherapy to learn to cope) should be recommended not only as complementary to traditional treatments of vision loss but possibly as preventive means to reduce the progression of vision loss. Secondly, doctors should try their best to inculcate positivity and optimism in their patients while giving them the information to which they are entitled, especially regarding the important value of stress reduction. In this way, the vicious cycle could be interrupted. More clinical studies are now needed to confirm the causal role of stress in different low vision diseases and to evaluate the efficacy of anti-stress therapies for preventing progression and improving vision recovery and restoration in randomized trials, as a foundation of psychosomatic ophthalmology.
Improving Federal Education Programs through an Integrated Performance and Benchmarking System.
ERIC Educational Resources Information Center
Department of Education, Washington, DC. Office of the Under Secretary.
This document highlights the problems with current federal education program data collection activities and lists several factors that favor movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…
Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.
1983-08-15
obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey
Monitoring physiology and behavior using Android in phobias.
Cruz, Telmo; Brás, Susana; Soares, Sandra C; Fernandes, José Maria
2015-08-01
In this paper, we present an Android-based system application, AWARE, for the assessment of a person's physiology and behavior outside the laboratory. To accomplish this purpose, AWARE delivers context-dependent audio-visual stimuli, embedded into the subject's real-world perception, via marker/vision-based augmented reality (AR) technology. In addition, it employs external measuring resources connected via Bluetooth, as well as the smartphone's integrated resources. It synchronously acquires the experiment's video (camera input with AR overlay), physiological responses (with a dedicated ECG measuring device), and behavior (through movement and location, with accelerometer/gyroscope and GPS, respectively). Psychological assessment is heavily based on laboratory procedures, even though it is known that these settings disturb the subjects' natural reactions and condition. The central idea of this application is to evaluate the participant's condition while mimicking his or her real-life conditions. Given that phobias are highly context specific, they represent the ideal candidate for assessing the feasibility of a mobile system application. AWARE allowed presenting AR stimuli (e.g., 3D spiders) and quantifying the subjects' reactions non-intrusively (e.g., heart rate variation), reactions that were more pronounced in the phobic volunteer when presented with spider versus non-phobic stimuli. Although still a proof of concept, AWARE proved to be flexible and straightforward to set up, with the potential to support ecologically valid monitoring experiments.
Generic Dynamic Environment Perception Using Smart Mobile Devices.
Danescu, Radu; Itu, Razvan; Petrovai, Andra
2016-10-17
The driving environment is complex and dynamic, and the driver's attention is continuously challenged; therefore, computer-based assistance achieved by processing image and sensor data may increase traffic safety. While active sensors and stereovision have the advantage of obtaining 3D data directly, monocular vision is easy to set up and can benefit from the increasing computational power of smart mobile devices and from the fact that almost all of them come with an embedded camera. Several driving assistance applications are available for mobile devices, but they are mostly targeted at simple scenarios and a limited range of obstacle shapes and poses. This paper presents a technique for generic, shape-independent, real-time obstacle detection on mobile devices, based on a dynamic, free-form 3D representation of the environment: the particle-based occupancy grid. Images acquired in real time from the smart mobile device's camera are processed by removing the perspective effect and segmenting the resulting bird's-eye view image to identify candidate obstacle areas, which are then used to update the occupancy grid. The tracked occupancy grid cells are grouped into obstacles depicted as cuboids having position, size, orientation and speed. The easy-to-set-up system is able to reliably detect most obstacles in urban traffic, and its measurement accuracy is comparable to that of a stereovision system.
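The perspective-removal step is a single homography warp once the image-to-ground correspondences are known. A minimal OpenCV sketch, in which the four point correspondences are illustrative stand-ins for the device's actual calibration:

```python
# Sketch of inverse perspective mapping: warp the camera frame to a
# bird's-eye view via a homography.  The src/dst correspondences and
# output size are illustrative; real values come from calibration.
import cv2
import numpy as np

src = np.float32([[420, 720], [860, 720], [700, 450], [580, 450]])  # image px
dst = np.float32([[300, 600], [340, 600], [340, 0], [300, 0]])      # grid cells
H = cv2.getPerspectiveTransform(src, dst)

def bird_eye(frame: np.ndarray) -> np.ndarray:
    """Return the top-down view used to update the occupancy grid."""
    return cv2.warpPerspective(frame, H, (640, 600))
```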
A vision-based approach for the direct measurement of displacements in vibrating systems
NASA Astrophysics Data System (ADS)
Mazen Wahbeh, A.; Caffrey, John P.; Masri, Sami F.
2003-10-01
This paper reports the results of an analytical and experimental study to develop, calibrate, implement, and evaluate the feasibility of a novel vision-based approach for obtaining direct measurements of the absolute displacement time history at selectable locations of dispersed civil infrastructure systems such as long-span bridges. The measurements were obtained using a highly accurate camera in conjunction with a laser tracking reference. Calibration of the vision system was conducted in the lab to establish performance envelopes and data processing algorithms to extract the needed information from the captured vision scene. Subsequently, the monitoring apparatus was installed in the vicinity of the Vincent Thomas Bridge in the metropolitan Los Angeles region. This allowed deployment of the instrumentation system under realistic conditions so as to determine field implementation issues that need to be addressed. It is shown that the proposed approach has the potential to lead to an economical and robust system for obtaining direct, simultaneous measurements, at several locations, of the displacement time histories of realistic infrastructure systems undergoing complex three-dimensional deformations.
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.; Wu, Chris K.; Lin, Y. H.
1991-01-01
A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects, and precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects, such as the Solarmax satellite, are created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise, and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.