Machine vision and appearance based learning
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-03-01
Smart algorithms are used in machine vision to organize and extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a given visual sensing system and belonging to an appearance space, can only be a key first step in solving various specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance space analysis. This problem is reduced to a regression-on-an-appearance-manifold problem, and new regression-on-manifolds methods are used for its solution.
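As a concrete illustration of the regression-on-a-manifold idea, the sketch below estimates a robot pose by locally weighted k-nearest-neighbour regression over stored appearance descriptors. This is a minimal stand-in for the paper's method, not its actual algorithm; the descriptors and poses here are synthetic.

    import numpy as np

    def knn_pose_regression(descriptor, appearance_db, poses, k=5):
        """Estimate a robot pose from an image descriptor by local (k-NN)
        regression on a sampled appearance manifold.
        appearance_db : (N, D) descriptors of reference images
        poses         : (N, 2) known (x, y) robot positions"""
        dists = np.linalg.norm(appearance_db - descriptor, axis=1)
        idx = np.argsort(dists)[:k]
        w = 1.0 / (dists[idx] + 1e-9)     # inverse-distance weights
        return (w[:, None] * poses[idx]).sum(axis=0) / w.sum()

    # Toy usage with random data standing in for image descriptors
    rng = np.random.default_rng(0)
    db = rng.normal(size=(100, 16))
    xy = rng.uniform(0, 10, size=(100, 2))
    print(knn_pose_regression(db[0] + 0.01, db, xy))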
Multispectral Image Processing for Plants
NASA Technical Reports Server (NTRS)
Miles, Gaines E.
1991-01-01
The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
Machine Vision Applied to Navigation of Confined Spaces
NASA Technical Reports Server (NTRS)
Briscoe, Jeri M.; Broderick, David J.; Howard, Ricky; Corder, Eric L.
2004-01-01
The reliability of space related assets has been emphasized after the second loss of a Space Shuttle. The intricate nature of the hardware being inspected often requires a complete disassembly to perform a thorough inspection which can be difficult as well as costly. Furthermore, it is imperative that the hardware under inspection not be altered in any other manner than that which is intended. In these cases the use of machine vision can allow for inspection with greater frequency using less intrusive methods. Such systems can provide feedback to guide, not only manually controlled instrumentation, but autonomous robotic platforms as well. This paper serves to detail a method using machine vision to provide such sensing capabilities in a compact package. A single camera is used in conjunction with a projected reference grid to ascertain precise distance measurements. The design of the sensor focuses on the use of conventional components in an unconventional manner with the goal of providing a solution for systems that do not require or cannot accommodate more complex vision systems.
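The single-camera, projected-grid arrangement reduces to structured-light triangulation. Assuming a parallel-axis geometry (an assumption, since the paper does not give its calibration model), the range of one grid point follows the familiar relation z = f*b/d:

    def grid_point_range(pixel_shift_px, focal_len_px, baseline_m):
        """Range to a surface from the image shift of one projected grid
        point, using the parallel-axis triangulation relation z = f*b/d
        (the same form as stereo disparity)."""
        if pixel_shift_px <= 0:
            raise ValueError("shift must be positive for a finite range")
        return focal_len_px * baseline_m / pixel_shift_px

    # Example: 800 px focal length, 5 cm projector-camera baseline,
    # a grid point shifted 40 px from its infinity position -> 1.0 m
    print(grid_point_range(40, 800, 0.05))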
NASA Astrophysics Data System (ADS)
Jain, A. K.; Dorai, C.
Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.
Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Young, Steven D.
2005-01-01
In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
Coupling sensing to crop models for closed-loop plant production in advanced life support systems
NASA Astrophysics Data System (ADS)
Cavazzoni, James; Ling, Peter P.
1999-01-01
We present a conceptual framework for coupling sensing to crop models for closed-loop analysis of plant production for NASA's program in advanced life support. Crop status may be monitored through non-destructive observations, while models may be independently applied to crop production planning and decision support. To achieve coupling, environmental variables and observations are linked to model inputs and outputs, and monitoring results are compared with model predictions of plant growth and development. The information thus provided may be useful in diagnosing problems with the plant growth system, or as feedback to the model for evaluation of plant scheduling and potential yield. In this paper, we demonstrate this coupling using machine vision sensing of canopy height and top projected canopy area, together with the CROPGRO crop growth model; model simulations and scenarios are used for illustration. We also compare model predictions of the machine vision variables with data from soybean experiments conducted at the New Jersey Agricultural Experiment Station Horticulture Greenhouse Facility, Rutgers University. Model simulations produce reasonable agreement with the available data, supporting our illustration.
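A minimal sketch of the coupling loop described above, under the assumption that the comparison is a simple relative-deviation test (the paper does not specify its criterion); the TPCA values below are invented for illustration.

    def check_canopy_growth(measured_tpca, predicted_tpca, tol=0.15):
        """Compare machine-vision-measured top projected canopy area (TPCA)
        against crop-model predictions; flag days whose relative deviation
        exceeds tol as triggers for diagnosis or model re-calibration."""
        flags = []
        for day, (m, p) in enumerate(zip(measured_tpca, predicted_tpca)):
            if abs(m - p) / p > tol:
                flags.append(day)
        return flags

    measured  = [120, 150, 170, 160, 250]   # cm^2, hypothetical readings
    predicted = [118, 148, 180, 215, 255]   # cm^2, hypothetical model output
    print(check_canopy_growth(measured, predicted))  # -> [3]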
Fusion of Multiple Sensing Modalities for Machine Vision
1994-05-31
NASA Astrophysics Data System (ADS)
Hashimoto, Manabu; Fujino, Yozo
Image sensing technologies are expected to be a useful and effective way to reduce damage from crime and disasters in a highly safe and secure society. In this paper, we describe currently important subjects, required functions, technical trends, and several real examples of developed systems. For video surveillance, recognition of human trajectories and human behavior using image processing techniques is introduced with real examples, including violence detection in elevators. In the field of facility monitoring for civil engineering, useful machine vision applications are shown, such as automatic detection of concrete cracks on the walls of a building and recognition of crowds on a bridge for effective guidance in an emergency.
Machine intelligence and autonomy for aerospace systems
NASA Technical Reports Server (NTRS)
Heer, Ewald (Editor); Lum, Henry (Editor)
1988-01-01
The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.
Algorithms and architectures for robot vision
NASA Technical Reports Server (NTRS)
Schenker, Paul S.
1990-01-01
The scope of the current work is to develop practical sensing implementations for robots operating in complex, partially unstructured environments. A focus of this work is to develop object models and estimation techniques which are specific to the requirements of robot locomotion, approach and avoidance, and grasp and manipulation. Such problems have to date received limited attention in either computer or human vision; in essence, we ask not only how perception is modeled in general, but also what the functional purpose of its underlying representations is. As in the past, researchers are drawing on ideas from both the psychological and machine vision literature. Of particular interest is the development of 3-D shape and motion estimates for complex objects when given only partial and uncertain information and when such information is incrementally accrued over time. Current studies consider the use of surface motion, contour, and texture information, with the longer range goal of developing a fused sensing strategy based on these sources and others.
Machine vision extracted plant movement for early detection of plant water stress.
Kacira, M; Ling, P P; Short, T H
2002-01-01
A methodology was established for early, non-contact, and quantitative detection of plant water stress with machine vision extracted plant features. Top-projected canopy area (TPCA) of the plants was extracted from plant images using image-processing techniques. Water stress induced plant movement was decoupled from plant diurnal movement and plant growth using the coefficient of relative variation of TPCA (CRV(TPCA)), which was found to be an effective marker for water stress detection. The threshold value of CRV(TPCA) as an indicator of water stress was determined by a parametric approach. The effectiveness of the sensing technique was evaluated against the timing of stress detection by a human operator. The results of this study suggest that plant water stress detection using projected canopy area based features of the plants is feasible.
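The CRV is the windowed ratio of standard deviation to mean, so stress-induced movement shows up as a rise in relative variability on top of smooth growth. A minimal sketch with synthetic data follows; the window length and threshold are illustrative, not the values fitted in the study.

    import numpy as np

    def crv(series, window=12):
        """Coefficient of relative variation (std/mean) of a TPCA series
        over a sliding window; it rises when stress-induced wilting adds
        variability on top of smooth growth."""
        s = np.asarray(series, dtype=float)
        out = np.full(s.size, np.nan)
        for i in range(window - 1, s.size):
            w = s[i - window + 1 : i + 1]
            out[i] = w.std() / w.mean()
        return out

    rng = np.random.default_rng(1)
    tpca = np.concatenate([np.linspace(100, 110, 24),       # steady growth
                           110 + 6 * rng.normal(size=12)])  # stress onset
    print(np.flatnonzero(crv(tpca) > 0.02))  # indices flagged as stressed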
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.
2014-10-01
Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.
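A rough sketch of the shape of the CoSA pipeline, with scikit-learn's dictionary learner standing in for the paper's Hebbian rule and plain k-means standing in for its spectral-spatial clustering metric; the band-difference image here is synthetic.

    import numpy as np
    from sklearn.feature_extraction.image import extract_patches_2d
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.cluster import KMeans

    # Synthetic stand-in for one normalized band-difference index image,
    # e.g. NDVI = (NIR - red) / (NIR + red) from Worldview-2 bands
    rng = np.random.default_rng(0)
    ndvi = rng.uniform(-1, 1, size=(64, 64))

    # Learn a small patch dictionary, then cluster the sparse codes to
    # produce land-cover-style labels (a rough stand-in for CoSA)
    patches = extract_patches_2d(ndvi, (8, 8), max_patches=500, random_state=0)
    X = patches.reshape(len(patches), -1)
    codes = MiniBatchDictionaryLearning(n_components=16, alpha=1.0,
                                        random_state=0).fit_transform(X)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(codes)
    print(np.bincount(labels))   # patch count per cluster label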
Illusory movement perception improves motor control for prosthetic hands
Marasco, Paul D.; Hebert, Jacqueline S.; Sensinger, Jon W.; Shell, Courtney E.; Schofield, Jonathon S.; Thumser, Zachary C.; Nataraj, Raviraj; Beckler, Dylan T.; Dawson, Michael R.; Blustein, Dan H.; Gill, Satinder; Mensh, Brett D.; Granja-Vazquez, Rafael; Newcomb, Madeline D.; Carey, Jason P.; Orzell, Beth M.
2018-01-01
To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement’s progress. This largely non-conscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. Here we report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines.
Modelling and representation issues in automated feature extraction from aerial and satellite images
NASA Astrophysics Data System (ADS)
Sowmya, Arcot; Trinder, John
New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.
NASA Technical Reports Server (NTRS)
1998-01-01
Under a Jet Propulsion Laboratory SBIR (Small Business Innovative Research) contract, Cambridge Research and Instrumentation Inc. developed a new class of filters for the construction of small, low-cost multispectral imagers. The VariSpec liquid crystal tunable filter enables users to obtain multispectral, ultra-high resolution images using a monochrome CCD (charge coupled device) camera. Application areas include biomedical imaging, remote sensing, and machine vision.
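A sketch of how such a tunable-filter imager is typically driven: step the filter through passbands and stack monochrome frames into a spectral cube. The device callbacks below are hypothetical placeholders, not the VariSpec API.

    import numpy as np

    def acquire_spectral_cube(tune_filter, grab_frame, bands_nm):
        """Step a liquid-crystal tunable filter through a list of passbands
        and stack monochrome CCD frames into an (H, W, bands) cube.
        `tune_filter` and `grab_frame` are hypothetical device callbacks;
        real hardware APIs will differ."""
        frames = []
        for wl in bands_nm:
            tune_filter(wl)               # select passband, e.g. 550 nm
            frames.append(grab_frame())   # monochrome frame at that band
        return np.dstack(frames)

    # Simulated devices for a self-contained demo
    cube = acquire_spectral_cube(lambda wl: None,
                                 lambda: np.zeros((4, 4)),
                                 bands_nm=[450, 550, 650])
    print(cube.shape)  # (4, 4, 3)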
Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu
2018-06-22
The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from a pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. Under applied deformation, the electronic-skin can actively output a piezoelectric voltage, and the output signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and the electric power source. The reliability is demonstrated over 200 light on-off cycles. The 6 × 6 pixel sensing-unit matrix on the electronic-skin can realize image recognition by mapping multi-point UV stimuli. This self-powered vision electronic-skin, which simply mimics the human retina, may have potential application in vision substitution.
NASA Astrophysics Data System (ADS)
Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu
2018-06-01
The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from a pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. Under applied deformation, the electronic-skin can actively output a piezoelectric voltage, and the output signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and the electric power source. The reliability is demonstrated over 200 light on-off cycles. The 6 × 6 pixel sensing-unit matrix on the electronic-skin can realize image recognition by mapping multi-point UV stimuli. This self-powered vision electronic-skin, which simply mimics the human retina, may have potential application in vision substitution.
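To make the 6 × 6 image-recognition step concrete, a toy sketch: binarize the pixel voltages and match against stored binary templates by Hamming distance. The threshold, voltage scale, and templates are all invented for illustration.

    import numpy as np

    def recognize_pattern(voltages, templates, v_thresh=0.5):
        """Binarize a 6x6 matrix of piezoelectric pixel outputs (pixels
        under UV read differently) and return the name of the nearest
        stored binary template by Hamming distance."""
        img = (np.asarray(voltages) > v_thresh).astype(int)
        return min(templates, key=lambda k: np.sum(img != templates[k]))

    # Two toy templates: the letters "T" and "L" on a 6x6 grid
    T = np.zeros((6, 6), int); T[0, :] = 1; T[:, 2] = 1
    L = np.zeros((6, 6), int); L[:, 0] = 1; L[5, :] = 1
    reading = T * 0.8                       # volts, hypothetical scale
    print(recognize_pattern(reading, {"T": T, "L": L}))  # -> "T"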
Color line-scan technology in industrial applications
NASA Astrophysics Data System (ADS)
Lemstrom, Guy F.
1995-10-01
Color machine vision opens new possibilities for industrial on-line quality control applications. With color machine vision it is possible to detect different colors and shades, perform color separation and spectroscopic applications, and at the same time make the same geometrical measurements as with gray-scale technology, such as dimensions, shape, and texture. Combining these capabilities in a color line-scan camera brings machine vision to new application areas and new dimensions in the machine vision business. Quality and process control requirements in industry become more demanding every day. Color machine vision can be the solution for many simple tasks that have not been realized with gray-scale technology; the inability to detect or measure colors has been one reason why machine vision has not been used in quality control as much as it could have been. Color machine vision has attracted growing enthusiasm in industrial machine vision applications. Potential industries include food, wood, mining and minerals, printing, paper, glass, plastics, and recycling, with tasks ranging from simple measurement to total process and quality control. Color machine vision is not only for measuring colors: it can also be used for contrast enhancement, object detection, background removal, and structure detection and measurement. Color or spectral separation opens many more ways of working out machine vision applications than before; it is only a question of how to use the benefit of having two or more data values per measured pixel instead of one, as with traditional gray-scale technology. Plenty of potential applications can already be realized with color vision today, and it is going to give more performance to many traditional gray-scale applications in the near future. The most important feature, however, is that color machine vision offers a new way of working out applications where machine vision has not been applied before.
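For instance, a single hue test already performs the kind of color separation a gray-scale system cannot: two shades of equal luminance can be identical in gray yet cleanly separable in hue. A minimal sketch with illustrative class boundaries:

    import colorsys

    def classify_shade(r, g, b):
        """Classify one RGB pixel (components in 0-1) into a coarse shade
        class by hue; the saturation gate and hue boundaries are
        illustrative, not from any production system."""
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s < 0.15:
            return "neutral"          # gray/white/black: no usable hue
        if h < 1/6 or h > 5/6:
            return "red-ish"
        if h < 1/2:
            return "green/yellow"
        return "blue-ish"

    print(classify_shade(0.8, 0.2, 0.2))  # red-ish
    print(classify_shade(0.5, 0.5, 0.5))  # neutral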
Machine Vision Giving Eyes to Robots. Resources in Technology.
ERIC Educational Resources Information Center
Technology Teacher, 1990
1990-01-01
This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)
Machine vision for digital microfluidics
NASA Astrophysics Data System (ADS)
Shin, Yong-Jun; Lee, Jeong-Bong
2010-01-01
Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.
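A toy sketch of the vision-in-the-loop idea for droplet motion control: segment the droplet, take its centroid, and choose the next electrode to energize. The bang-bang rule and geometry are illustrative, not the authors' controller.

    import numpy as np

    def droplet_centroid(frame, thresh=0.5):
        """Segment a bright droplet from a grayscale frame (values 0-1)
        and return its centroid in pixel coordinates, or None if absent."""
        ys, xs = np.nonzero(frame > thresh)
        if xs.size == 0:
            return None
        return xs.mean(), ys.mean()

    def next_electrode(centroid_x, electrode_pitch_px, target_idx):
        """Pick the electrode to energize so the droplet steps toward the
        target electrode index (a minimal bang-bang control rule)."""
        current = int(round(centroid_x / electrode_pitch_px))
        if current == target_idx:
            return current                                  # hold position
        return current + (1 if target_idx > current else -1)

    frame = np.zeros((20, 60))
    frame[8:12, 10:16] = 1.0                 # synthetic droplet
    cx, cy = droplet_centroid(frame)
    print(cx, next_electrode(cx, electrode_pitch_px=10, target_idx=4))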
Illusory movement perception improves motor control for prosthetic hands.
Marasco, Paul D; Hebert, Jacqueline S; Sensinger, Jon W; Shell, Courtney E; Schofield, Jonathon S; Thumser, Zachary C; Nataraj, Raviraj; Beckler, Dylan T; Dawson, Michael R; Blustein, Dan H; Gill, Satinder; Mensh, Brett D; Granja-Vazquez, Rafael; Newcomb, Madeline D; Carey, Jason P; Orzell, Beth M
2018-03-14
To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement's progress. This largely nonconscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. We report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines.
Electronic Nose and Electronic Tongue
NASA Astrophysics Data System (ADS)
Bhattacharyya, Nabarun; Bandhopadhyay, Rajib
Human beings have five senses, namely, vision, hearing, touch, smell and taste. Sensors for vision, hearing and touch have been under development for many years. The need for sensors capable of mimicking the senses of smell and taste has been felt only recently in the food industry, environmental monitoring and several industrial applications. In the ever-widening horizon of frontier research in the field of electronics and advanced computing, the emergence of the electronic nose (E-Nose) and electronic tongue (E-Tongue) has been drawing the attention of scientists and technologists for more than a decade. By intelligent integration of a multitude of technologies such as chemometrics, microelectronics and advanced soft computing, human olfaction has been successfully mimicked by techniques collectively called machine olfaction (Pearce et al. 2002). But the very essence of such research and development efforts has centered on the development of customized electronic nose and electronic tongue solutions specific to individual applications. In fact, current research trends clearly point to the fact that a machine olfaction system as versatile, universal and broadband as the human nose and tongue may not be feasible in the decades to come, but application-specific solutions may well be demonstrated and commercialized by modulating sensor design and fine-tuning the soft computing solutions. This chapter deals with the theory, development and applications of E-Nose and E-Tongue technology. A succinct account of future R&D trends in this field, with the objective of establishing a correlation between machine olfaction and human perception, is also included.
ERIC Educational Resources Information Center
Chen, Kan; Stafford, Frank P.
A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…
The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.
1994-01-01
Currently available robotic systems provide limited support for CAD-based, model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and machine vision is created through programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.
The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety
NASA Astrophysics Data System (ADS)
Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.
1994-02-01
Currently available robotic systems provide limited support for CAD-based, model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and machine vision is created through programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.
Machine vision 1992-1996: technology program to promote research and its utilization in industry
NASA Astrophysics Data System (ADS)
Soini, Antti J.
1994-10-01
Machine vision technology has attracted strong interest in Finnish research organizations, resulting in many innovative products for industry. Despite this, end users were very skeptical about machine vision and its robustness in harsh industrial environments. Therefore the Technology Development Centre TEKES, which funds technology-related research and development projects in universities and individual companies, decided to start a national technology program, Machine Vision 1992-1996. Led by industry, the program boosts research in machine vision technology and seeks to put the research results to work in practical industrial applications. The emphasis is on nationally important, demanding applications. The program will create new industry and business for machine vision producers and encourage the process and manufacturing industries to take advantage of this new technology. So far 60 companies and all major universities and research centers are working on forty different projects. The key themes are process control, robot vision and quality control.
The potential and prospects of proximal remote sensing of arthropod pests.
Nansen, Christian
2016-04-01
Bench-top or proximal remote sensing applications are widely used as part of quality control and machine vision systems in commercial operations. In addition, these technologies are becoming increasingly important in insect systematics and studies of insect physiology and pest management. This paper provides a review and discussion of how proximal remote sensing may contribute valuable quantitative information regarding identification of species, assessment of insect responses to insecticides, insect host responses to parasitoids and performance of biological control agents. The future role of proximal remote sensing is discussed as an exciting avenue for novel multidisciplinary research among entomologists and scientists from a wide range of other disciplines, including image processing engineers, medical engineers, research pharmacists and computer scientists. © 2015 Society of Chemical Industry.
Intelligent excavator control system for lunar mining system
NASA Astrophysics Data System (ADS)
Lever, Paul J. A.; Wang, Fei-Yue
1995-01-01
A major benefit of utilizing local planetary resources is that it reduces the need and cost of lifting materials from the Earth's surface into Earth orbit. The location of the moon makes it an ideal site for harvesting the materials needed to assist space activities. Here, lunar excavation will take place in the dynamic unstructured lunar environment, in which conditions are highly variable and unpredictable. Autonomous mining (excavation) machines are necessary to remove human operators from this hazardous environment. This machine must use a control system structure that can identify, plan, sense, and control real-time dynamic machine movements in the lunar environment. The solution is a vision-based hierarchical control structure. However, excavation tasks require force/torque sensor feedback to control the excavation tool after it has penetrated the surface. A fuzzy logic controller (FLC) is used to interpret the forces and torques gathered from a bucket mounted force/torque sensor during excavation. Experimental results from several excavation tests using the FLC are presented here. These results represent the first step toward an integrated sensing and control system for a lunar mining system.
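A minimal Mamdani-style sketch of an FLC that maps bucket draft force to dig speed, with triangular memberships and weighted-average defuzzification; the membership ranges and rule consequents are invented, not the tuned values from the experiments.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def flc_dig_speed(draft_force_kn):
        """Toy rule base: high resistive force on the bucket -> slow the
        dig; low force -> speed up. Weighted-average defuzzification."""
        mu_low  = tri(draft_force_kn, -5, 0, 10)
        mu_med  = tri(draft_force_kn, 5, 15, 25)
        mu_high = tri(draft_force_kn, 20, 30, 45)
        # Rule consequents (cm/s): low -> 8, medium -> 4, high -> 1
        num = mu_low * 8 + mu_med * 4 + mu_high * 1
        den = mu_low + mu_med + mu_high
        return num / den if den else 0.0

    for force in (2, 12, 28):                 # kN, illustrative readings
        print(force, round(flc_dig_speed(force), 2))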
Recognizing sights, smells, and sounds with gnostic fields.
Kanan, Christopher
2013-01-01
Mammals rely on vision, audition, and olfaction to remotely sense stimuli in their environment. Determining how the mammalian brain uses this sensory information to recognize objects has been one of the major goals of psychology and neuroscience. Likewise, researchers in computer vision, machine audition, and machine olfaction have endeavored to discover good algorithms for stimulus classification. Almost 50 years ago, the neuroscientist Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of "gnostic" neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation. Much of what Konorski hypothesized has been remarkably accurate, and neurons with gnostic-like properties have been discovered in visual, aural, and olfactory brain regions. Surprisingly, there have not been any attempts to directly transform his theoretical model into a computational one. Here, I describe the first computational implementation of Konorski's theory. The model is not domain specific, and it surpasses the best machine learning algorithms on challenging image, music, and olfactory classification tasks, while also being simpler. My results suggest that criticisms of exemplar-based models of object recognition as being computationally intractable due to limited neural resources are unfounded.
Recognizing Sights, Smells, and Sounds with Gnostic Fields
Kanan, Christopher
2013-01-01
Mammals rely on vision, audition, and olfaction to remotely sense stimuli in their environment. Determining how the mammalian brain uses this sensory information to recognize objects has been one of the major goals of psychology and neuroscience. Likewise, researchers in computer vision, machine audition, and machine olfaction have endeavored to discover good algorithms for stimulus classification. Almost 50 years ago, the neuroscientist Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of “gnostic” neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation. Much of what Konorski hypothesized has been remarkably accurate, and neurons with gnostic-like properties have been discovered in visual, aural, and olfactory brain regions. Surprisingly, there have not been any attempts to directly transform his theoretical model into a computational one. Here, I describe the first computational implementation of Konorski's theory. The model is not domain specific, and it surpasses the best machine learning algorithms on challenging image, music, and olfactory classification tasks, while also being simpler. My results suggest that criticisms of exemplar-based models of object recognition as being computationally intractable due to limited neural resources are unfounded.
Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras
NASA Astrophysics Data System (ADS)
Quinn, Mark Kenneth
2018-05-01
Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three-CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
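Dual-component PSP data reduction commonly uses a ratio-of-ratios with a linear Stern-Volmer calibration; a sketch under that assumption (the coefficients below are placeholders, not this study's calibration):

    def psp_pressure(i_p_on, i_r_on, i_p_off, i_r_off,
                     a=0.15, b=0.85, p_ref_kpa=101.3):
        """Dual-component PSP reduction with a linear Stern-Volmer model,
        I_off / I_on = A + B * (P / P_ref).
        The pressure-sensitive channel (i_p) is normalized by the
        pressure-insensitive reference channel (i_r) to cancel illumination
        and paint-thickness variation. A and B are illustrative."""
        ratio = (i_p_off / i_p_on) / (i_r_off / i_r_on)
        return (ratio - a) / b * p_ref_kpa

    # Identical wind-on and wind-off images must return the reference pressure
    print(psp_pressure(1.0, 1.0, 1.0, 1.0))   # 101.3 kPa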
Industrial Inspection with Open Eyes: Advance with Machine Vision Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Niel, Kurt
Machine vision systems have evolved significantly with technology advances to tackle the challenges from modern manufacturing industry. A wide range of industrial inspection applications for quality control are benefiting from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection, and the combination of multiple technologies makes it possible for the inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.
Machine vision systems using machine learning for industrial product inspection
NASA Astrophysics Data System (ADS)
Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony
2002-02-01
Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
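A toy sketch of the two-stage SMV split, with a golden-template statistic standing in for the LIF stage and a tolerance-band test for the OLI stage; the actual systems learn far richer features from CAD data and display patterns.

    import numpy as np

    def learn_inspection_features(good_images):
        """LIF stage (sketch): learn a per-pixel mean and tolerance band
        from images of known-good boards."""
        stack = np.stack(good_images).astype(float)
        return stack.mean(axis=0), stack.std(axis=0) + 1e-3

    def inspect_board(image, mean, std, k=4.0, max_bad_frac=0.001):
        """OLI stage (sketch): pass a board if almost no pixels fall
        outside the learned band; k and max_bad_frac are illustrative."""
        bad = np.abs(image - mean) > k * std
        return bool(bad.mean() <= max_bad_frac)

    good = [100 + np.random.default_rng(i).normal(0, 1, (8, 8))
            for i in range(10)]
    mean, std = learn_inspection_features(good)
    defective = good[0].copy()
    defective[2, 3] = 160.0                    # a bright solder blob
    print(inspect_board(good[1], mean, std))   # True
    print(inspect_board(defective, mean, std)) # False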
Ethical, environmental and social issues for machine vision in manufacturing industry
NASA Astrophysics Data System (ADS)
Batchelor, Bruce G.; Whelan, Paul F.
1995-10-01
Some of the ethical, environmental and social issues relating to the design and use of machine vision systems in manufacturing industry are highlighted. The authors' aim is to emphasize some of the more important issues, and raise general awareness of the need to consider the potential advantages and hazards of machine vision technology. However, in a short article like this, it is impossible to cover the subject comprehensively. This paper should therefore be seen as a discussion document, which it is hoped will provoke more detailed consideration of these very important issues. It follows from an article presented at last year's workshop. Five major topics are discussed: (1) The impact of machine vision systems on the environment; (2) The implications of machine vision for product and factory safety, the health and well-being of employees; (3) The importance of intellectual integrity in a field requiring a careful balance of advanced ideas and technologies; (4) Commercial and managerial integrity; and (5) The impact of machine visions technology on employment prospects, particularly for people with low skill levels.
Basic design principles of colorimetric vision systems
NASA Astrophysics Data System (ADS)
Mumzhiu, Alex M.
1998-10-01
Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments, such as colorimeters and spectrophotometers, used for production quality control have many limitations; in many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain, and the few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limits of a journal paper, so only the most important aspects are discussed, including an overview of the major areas of application for colorimetric vision systems. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
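One such colorimetric principle is that color differences should be judged in a perceptually uniform space rather than in raw RGB. A sketch of the standard sRGB to XYZ to CIELAB conversion and the CIE76 delta-E difference:

    def srgb_to_lab(r, g, b):
        """sRGB (components 0-1) -> CIELAB under D65, the transform a
        camera-based color gauge needs before judging color differences."""
        def lin(u):                      # undo the sRGB gamma curve
            return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
        r, g, b = lin(r), lin(g), lin(b)
        x = 0.4124 * r + 0.3576 * g + 0.1805 * b
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        z = 0.0193 * r + 0.1192 * g + 0.9505 * b
        def f(t):                        # CIE L*a*b* companding
            return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
        fx, fy, fz = f(x / 0.95047), f(y), f(z / 1.08883)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    def delta_e(lab1, lab2):
        """CIE76 color difference; about 2.3 is a just-noticeable step."""
        return sum((u - v) ** 2 for u, v in zip(lab1, lab2)) ** 0.5

    print(round(delta_e(srgb_to_lab(1, 0, 0), srgb_to_lab(0.95, 0, 0)), 2))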
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
Task-focused modeling in automated agriculture
NASA Astrophysics Data System (ADS)
Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack
1993-01-01
Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.
A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)
Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon
1990-01-01
Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...
Using Multiple FPGA Architectures for Real-time Processing of Low-level Machine Vision Functions
Thomas H. Drayer; William E. King; Philip A. Araman; Joseph G. Tront; Richard W. Conners
1995-01-01
In this paper, we investigate the use of multiple Field Programmable Gate Array (FPGA) architectures for real-time machine vision processing. The use of FPGAs for low-level processing represents an excellent tradeoff between software and special purpose hardware implementations. A library of modules that implement common low-level machine vision operations is presented...
Machine vision for real time orbital operations
NASA Technical Reports Server (NTRS)
Vinz, Frank L.
1988-01-01
Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).
Robust human machine interface based on head movements applied to assistive robotics.
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
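For a scalar angle, the minimum-variance fusion the paper refers to reduces to inverse-variance weighting of the two estimates; a sketch, assuming the sensor variances are known and the estimates are unbiased and independent:

    def fuse_orientation(theta_imu, var_imu, theta_cam, var_cam):
        """Minimum-variance (inverse-variance weighted) fusion of two yaw
        estimates of the user's head, one from the inertial sensor and
        one from vision; returns the fused angle and its variance."""
        w_imu, w_cam = 1.0 / var_imu, 1.0 / var_cam
        theta = (w_imu * theta_imu + w_cam * theta_cam) / (w_imu + w_cam)
        return theta, 1.0 / (w_imu + w_cam)

    # Vision is noisier here, so the fused angle leans toward the IMU
    print(fuse_orientation(10.0, 1.0, 14.0, 4.0))  # (10.8, 0.8)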
Applications of lasers to production metrology, control, and machine 'Vision'
NASA Astrophysics Data System (ADS)
Pryor, T. R.; Erf, R. K.; Gara, A. D.
1982-06-01
General areas of laser application to production measurement and inspection are reviewed together with the associated laser measurement techniques. The topics discussed include dimensional gauging of part profiles using laser imaging or scanning techniques, laser triangulation for surface contour measurement, surface finish measurement and defect inspection, holography and speckle techniques, and strain measurement. The emerging field of robot guidance utilizing lasers and other sensing means is examined, and, finally, the use of laser marking and reading equipment is briefly discussed.
Knowledge-based vision and simple visual machines.
Cliff, D; Noble, J
1997-01-01
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong.
Machine vision system for inspecting characteristics of hybrid rice seed
NASA Astrophysics Data System (ADS)
Cheng, Fang; Ying, Yibin
2004-03-01
Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. Analysis of rice seed reflectance curves showed that the best light-source wavelength for discriminating diseased seeds from normal rice seeds in the monochromatic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed with a computer vision system, an adjustable color machine vision system was developed. With a 20 mm to 25 mm lens extender, the machine vision system produces close-up images that make it easy to recognize characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease and for shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system, and the same algorithms yielded better results under the optimized conditions for quality inspection of rice seed; specifically, the image processing can resolve details such as fine fissures.
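A sketch of the kind of single-band decision the 815 nm finding suggests: flag seeds whose mean reflectance at that wavelength departs from a healthy-seed calibration. The 3-sigma rule and the numbers are illustrative, not the paper's classifier.

    import numpy as np

    def flag_diseased(reflectance_815, healthy_mean, healthy_std, k=3.0):
        """Flag seeds whose mean reflectance at ~815 nm deviates more than
        k standard deviations from a healthy-seed calibration; the band
        choice follows the abstract, the threshold rule is illustrative."""
        r = np.asarray(reflectance_815, dtype=float)
        return np.abs(r - healthy_mean) > k * healthy_std

    seeds = [0.61, 0.60, 0.43, 0.62]          # per-seed mean reflectance
    print(flag_diseased(seeds, healthy_mean=0.60, healthy_std=0.02))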
A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)
Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon
1992-01-01
Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...
NASA Technical Reports Server (NTRS)
Ross, Kenton W.; McKellip, Rodney D.
2005-01-01
Topics covered include: Implementation and Validation of Sensor-Based Site-Specific Crop Management; Enhanced Management of Agricultural Perennial Systems (EMAPS) Using GIS and Remote Sensing; Validation and Application of Geospatial Information for Early Identification of Stress in Wheat; Adapting and Validating Precision Technologies for Cotton Production in the Mid-Southern United States - 2004 Progress Report; Development of a System to Automatically Geo-Rectify Images; Economics of Precision Agriculture Technologies in Cotton Production-AG 2020 Prescription Farming Automation Algorithms; Field Testing a Sensor-Based Applicator for Nitrogen and Phosphorus Application; Early Detection of Citrus Diseases Using Machine Vision and DGPS; Remote Sensing of Citrus Tree Stress Levels and Factors; Spectral-based Nitrogen Sensing for Citrus; Characterization of Tree Canopies; In-field Sensing of Shallow Water Tables and Hydromorphic Soils with an Electromagnetic Induction Profiler; Maintaining the Competitiveness of Tree Fruit Production Through Precision Agriculture; Modeling and Visualizing Terrain and Remote Sensing Data for Research and Education in Precision Agriculture; Thematic Soil Mapping and Crop-Based Strategies for Site-Specific Management; and Crop-Based Strategies for Site-Specific Management.
Camera-based micro interferometer for distance sensing
NASA Astrophysics Data System (ADS)
Will, Matthias; Schädel, Martin; Ortlepp, Thomas
2017-12-01
Interference of light provides a high-precision, non-contact, and fast method for measuring distances, which is why this technology dominates in high-precision systems. In the field of compact sensors, however, capacitive, resistive, or inductive methods dominate. The reason is that an interferometric system has to be precisely adjusted and needs high mechanical stability; the result is usually a high-priced, complex system unsuitable as a compact sensor. To overcome this, we developed a new concept for a very small interferometric sensing setup. We combine a miniaturized laser unit, a low-cost pixel detector, and machine vision routines to realize a demonstrator for a Michelson-type micro interferometer. We demonstrate a low-cost sensor smaller than 1 cm3, including all electronics, with distance sensing up to 30 cm and resolution in the nm range.
Color line scan camera technology and machine vision: requirements to consider
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.
1997-08-01
Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new camera and scanner technologies underscores. In the future, the movement from monochrome imaging to color will hasten as machine vision system users demand more knowledge about their product stream. As color has come to machine vision, the equipment used to digitize color images must meet certain requirements. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. The importance of these features grows when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been a much slower technique than gray-level image processing because of the three times greater data amount per image, and likewise the three times more memory needed. Advances in computers, memory, and processing units have made it possible to handle even large color images cost-efficiently today. In some cases, image analysis of color images can in fact be easier and faster than with a similar gray-level image because of the greater information per pixel. Color machine vision sets new requirements for lighting, too: high-intensity white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology are eventually bringing solutions for color imaging.
Machine Learning, deep learning and optimization in computer vision
NASA Astrophysics Data System (ADS)
Canu, Stéphane
2017-03-01
As noted at the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning, and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning, covering basic motivations, ideas, models, and optimization in deep learning for computer vision, and identifying challenges and opportunities. It will focus on issues related to large-scale learning, that is: high-dimensional features, a large variety of visual classes, and a large number of examples.
Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications
NASA Astrophysics Data System (ADS)
Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon
1997-04-01
A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted at real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines an Annealing Cellular Neural Network (ACNN) and a Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons, each locally connected with its neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. The feasibility of a compact current-mode VLSI design of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM-based avionics architecture for NASA's New Millennium Program is also described.
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system, and, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.
A Machine Vision Quality Control System for Industrial Acrylic Fibre Production
NASA Astrophysics Data System (ADS)
Heleno, Paulo; Davies, Roger; Correia, Bento A. Brázio; Dinis, João
2002-12-01
This paper describes the implementation of INFIBRA, a machine vision system used in the quality control of acrylic fibre production. The system was developed by INETI under a contract with a leading industrial manufacturer of acrylic fibres. It monitors several parameters of the acrylic production process. This paper presents, after a brief overview of the system, a detailed description of the machine vision algorithms developed to perform the inspection tasks unique to this system. Some of the results of online operation are also presented.
Machine Vision Systems for Processing Hardwood Lumber and Logs
Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline
1992-01-01
Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...
Robust Spatial Autoregressive Modeling for Hardwood Log Inspection
Dongping Zhu; A.A. Beex
1994-01-01
We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...
Hong, Weizhe; Kennedy, Ann; Burgos-Artizzu, Xavier P; Zelikowsky, Moriel; Navonne, Santiago G; Perona, Pietro; Anderson, David J
2015-09-22
A lack of automated, quantitative, and accurate assessment of social behaviors in mammalian animal models has limited progress toward understanding mechanisms underlying social interactions and their disorders such as autism. Here we present a new integrated hardware and software system that combines video tracking, depth sensing, and machine learning for automatic detection and quantification of social behaviors involving close and dynamic interactions between two mice of different coat colors in their home cage. We designed a hardware setup that integrates traditional video cameras with a depth camera, developed computer vision tools to extract the body "pose" of individual animals in a social context, and used a supervised learning algorithm to classify several well-described social behaviors. We validated the robustness of the automated classifiers in various experimental settings and used them to examine how genetic background, such as that of Black and Tan Brachyury (BTBR) mice (a previously reported autism model), influences social behavior. Our integrated approach allows for rapid, automated measurement of social behaviors across diverse experimental designs and also affords the ability to develop new, objective behavioral metrics.
Parallel Algorithms for Computer Vision
1990-04-01
[Record text consists of bibliography fragments only; recoverable citations include: Thinking Machines Corporation report NA86-1, Cambridge, MA, December 1986; J. Little, G. Blelloch, and T. Cass, "How to program the Connection Machine for computer vision," Proc. Workshop on Computer Architecture for Pattern Analysis and Machine Intelligence, 1987; and Proceedings of the SPIE Conf. on Advances in Intelligent Robotics Systems, Bellingham, VA, 1987.]
Color image processing and vision system for an automated laser paint-stripping system
NASA Astrophysics Data System (ADS)
Hickey, John M., III; Hise, Lawson
1994-10-01
Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white, and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.
Machine learning and computer vision approaches for phenotypic profiling.
Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J
2017-01-02
With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.
Trends and developments in industrial machine vision: 2013
NASA Astrophysics Data System (ADS)
Niel, Kurt; Heinzl, Christoph
2014-03-01
When following current advancements and implementations in the field of machine vision, there seem to be no borders for future developments: calculating power constantly increases, new ideas are spreading, and previously challenging approaches are being introduced into the mass market. Within the past decades these advances have had dramatic impacts on our lives. Consumer electronics, e.g. computers or telephones, which once occupied large volumes, now fit in the palm of a hand. To note just a few examples: face recognition was adopted by the consumer market, 3D capturing became cheap, and, thanks to a huge community, software coding got easier using sophisticated development platforms. However, a gap remains between consumer and industrial applications: while the first have to be entertaining, the second have to be reliable. Recent studies (e.g. VDMA [1], Germany) show a moderately increasing market for machine vision in industry. When industry is asked about its needs, the main challenges for industrial machine vision are simple usage, reliability for the process, quick support, full automation, and self/easy adjustment to changing process parameters ("forget it in the line"). A further big challenge is supporting quality control: nowadays the operator has to accurately define the tested features for checking the probes. There is also an upcoming development to let automated machine vision applications find the essential parameters at a more abstract level (top down). In this work we focus on three current and future topics for industrial machine vision: metrology supporting automation; quality control (inline/atline/offline); and visualization and analysis of datasets with steadily growing sizes. Finally, the general trend from pixel-oriented towards object-oriented evaluation is addressed. We do not directly address the field of robotics taking advantage of machine vision; that is a fast-changing area worthy of its own contribution.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
...''); Amistar Automation, Inc. (``Amistar'') of San Marcos, California; Techno Soft Systemnics, Inc. (``Techno..., the ALJ's construction of the claim terms ``test,'' ``match score surface,'' and ``gradient direction...
Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A
2017-07-25
Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research shows the relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.
Recent advances in the development and transfer of machine vision technologies for space
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P.; Pendleton, Thomas
1991-01-01
Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.
How virtual reality works: illusions of vision in "real" and virtual environments
NASA Astrophysics Data System (ADS)
Stark, Lawrence W.
1995-04-01
Visual illusions abound in normal vision: illusions of clarity and completeness, of continuity in time and space, of presence and vivacity. They are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting through an eye-hand and head-eye human-machine interface that provides congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization, and other professional work.
Cameras Reveal Elements in the Short Wave Infrared
NASA Technical Reports Server (NTRS)
2010-01-01
Goodrich ISR Systems Inc. (formerly Sensors Unlimited Inc.), based out of Princeton, New Jersey, received Small Business Innovation Research (SBIR) contracts from the Jet Propulsion Laboratory, Marshall Space Flight Center, Kennedy Space Center, Goddard Space Flight Center, Ames Research Center, Stennis Space Center, and Langley Research Center to assist in advancing and refining indium gallium arsenide imaging technology. Used on the Lunar Crater Observation and Sensing Satellite (LCROSS) mission in 2009 for imaging the short wave infrared wavelengths, the technology has dozens of applications in military, security and surveillance, machine vision, medical, spectroscopy, semiconductor inspection, instrumentation, thermography, and telecommunications.
Development of Moire machine vision
NASA Technical Reports Server (NTRS)
Harding, Kevin G.
1987-01-01
Three dimensional perception is essential to the development of versatile robotics systems in order to handle complex manufacturing tasks in future factories and in providing high accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation is developed to be able to make full field range measurement and three dimensional scene analysis.
Machine Vision For Industrial Control:The Unsung Opportunity
NASA Astrophysics Data System (ADS)
Falkman, Gerald A.; Murray, Lawrence A.; Cooper, James E.
1984-05-01
Vision modules have primarily been developed to relieve the pressures newly brought into existence by inspection (QUALITY) and robotic (PRODUCTIVITY) mandates. Industrial control pressure, on the other hand, stems from the older, first-industrial-revolution mandate of throughput. Satisfying such pressure calls for speed in both imaging and decision making. Vision companies have, however, put speed on a back burner or ignored it entirely, because most modules are computer/software based, which limits their speed potential. Increasingly, the keynote being struck at machine vision seminars is that "visual and computational speed must be increased, and dramatically!" There are modular hardwired-logic systems that are fast but, all too often, they are not very bright. Such units measure the fill factor of bottles as they spin by, read labels on cans, count stacked plastic cups, or monitor the width of parts streaming past the camera. Many are only a bit more complex than a photodetector. Once in place, most of these units are incapable of simple upgrading to a new task and are vision's analog to the robot industry's pick-and-place (RIA Type E) robot. Vision thus finds itself amidst the same quandaries that once beset the Robot Industry of America when it tried to define a robot, excluded dumb ones, and was left with only slow machines whose unit volume potential is shatteringly low. This paper develops an approach to meeting the need for a vision system that cuts a swath into the terra incognita of intelligent, high-speed vision processing. Main attention is directed to vision for industrial control. Some presently untapped vision application areas that will be serviced include electronics, food, sports, pharmaceuticals, machine tools, and arc welding.
Machine vision system for online inspection of freshly slaughtered chickens
USDA-ARS?s Scientific Manuscript database
A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
Knowledge-based machine vision systems for space station automation
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1989-01-01
Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
Operator-coached machine vision for space telerobotics
NASA Technical Reports Server (NTRS)
Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.
1991-01-01
A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system which would demonstrate the feasibility of highly interactive, operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
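A minimal sketch of this kind of least-squares edge fitting is given below, assuming a total-least-squares (SVD) line fit through all strong Sobel pixels; the real system instead seeds the fit from the operator's 3-D cursor designations, and the `threshold` value here is an arbitrary placeholder.

```python
import numpy as np
from scipy import ndimage

def fit_edge_line(gray, threshold=100.0):
    """Fit a 2-D line, in the total-least-squares sense, to strong Sobel edge pixels."""
    gx = ndimage.sobel(gray.astype(float), axis=1)   # horizontal gradient
    gy = ndimage.sobel(gray.astype(float), axis=0)   # vertical gradient
    ys, xs = np.nonzero(np.hypot(gx, gy) > threshold)
    pts = np.column_stack([xs, ys]).astype(float)
    centroid = pts.mean(axis=0)
    # The first right singular vector of the centered points is the line direction.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt[0]   # a point on the line and its unit direction
```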
Implementation of compressive sensing for preclinical cine-MRI
NASA Astrophysics Data System (ADS)
Tan, Elliot; Yang, Ming; Ma, Lixin; Zheng, Yahong Rosa
2014-03-01
This paper presents a practical implementation of Compressive Sensing (CS) for a preclinical MRI machine to acquire randomly undersampled k-space data in cardiac function imaging applications. First, random undersampling masks were generated based on Gaussian, Cauchy, wrapped Cauchy and von Mises probability distribution functions by the inverse transform method. The best masks for undersampling ratios of 0.3, 0.4 and 0.5 were chosen for animal experimentation, and were programmed into a Bruker Avance III BioSpec 7.0T MRI system through method programming in ParaVision. Three undersampled mouse heart datasets were obtained using a fast low angle shot (FLASH) sequence, along with a control undersampled phantom dataset. ECG and respiratory gating was used to obtain high quality images. After CS reconstructions were applied to all acquired data, resulting images were quantitatively analyzed using the performance metrics of reconstruction error and Structural Similarity Index (SSIM). The comparative analysis indicated that CS reconstructed images from MRI machine undersampled data were indeed comparable to CS reconstructed images from retrospective undersampled data, and that CS techniques are practical in a preclinical setting. The implementation achieved 2 to 4 times acceleration for image acquisition and satisfactory quality of image reconstruction.
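For illustration, the inverse transform method used above for mask generation can be sketched in a few lines for a Cauchy profile: draw uniform samples, push them through the inverse Cauchy CDF centered on the middle of k-space, and keep the phase-encode lines they land on. The line count, `gamma` scale, and seed below are hypothetical, not the values used on the Bruker system.

```python
import numpy as np

def cauchy_mask(n_lines=192, ratio=0.4, gamma=0.15, seed=0):
    """Random k-space line mask via inverse-transform sampling of a Cauchy PDF."""
    rng = np.random.default_rng(seed)
    keep = set()
    while len(keep) < int(ratio * n_lines):
        u = rng.random()
        x = 0.5 + gamma * np.tan(np.pi * (u - 0.5))  # inverse Cauchy CDF on [0, 1)
        if 0 <= x < 1:
            keep.add(int(x * n_lines))               # map to a phase-encode index
    mask = np.zeros(n_lines, dtype=bool)
    mask[list(keep)] = True
    return mask

print(cauchy_mask().sum(), "of 192 lines sampled")
```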
NASA Astrophysics Data System (ADS)
Razdan, Vikram; Bateman, Richard
2015-05-01
This study investigates the use of a smartphone and its camera vision capabilities in engineering metrology and flaw detection, with a view to developing a low-cost alternative to machine vision systems, which are out of reach for small-scale manufacturers. A smartphone has to provide a similar level of accuracy as machine vision devices like smart cameras. The objective set out was to develop an app on an Android smartphone, incorporating advanced computer vision algorithms written in Java code. The app could then be used for recording measurements of twist drill bits and hole geometry, and analysing the results for accuracy. A detailed literature review was carried out for an in-depth study of machine vision systems and their capabilities, including a comparison between the HTC One X Android smartphone and the Teledyne Dalsa BOA smart camera. A review of the existing metrology apps in the market was also undertaken. In addition, the drilling operation was evaluated to establish the key measurement parameters of a twist drill bit, especially flank wear and diameter. The methodology covers software development of the Android app, including the use of image processing algorithms such as Gaussian blur, Sobel, and Canny, available from the OpenCV software library, as well as the design and development of the experimental set-up for carrying out the measurements. The results obtained from the experimental set-up were analysed for the geometry of twist drill bits and holes, including diametrical measurements and flaw detection. The results show that smartphones like the HTC One X have the processing power and the camera capability to carry out metrological tasks, although the dimensional accuracy achievable from the smartphone app is below the level provided by machine vision devices like smart cameras. A smartphone with mechanical attachments, capable of image processing and having a reasonable level of accuracy in dimensional measurement, has the potential to become a handy low-cost machine vision system for small-scale manufacturers, especially in field metrology and flaw detection.
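The blur-then-edge-detect chain named above is straightforward to sketch with the OpenCV bindings; the example below is in Python rather than the app's Java, the file name and mm-per-pixel scale are hypothetical stand-ins for a calibrated setup, and OpenCV 4's two-value `findContours` return is assumed.

```python
import cv2

MM_PER_PX = 0.02  # hypothetical scale from a prior calibration step

img = cv2.imread("drill_bit.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
blur = cv2.GaussianBlur(img, (5, 5), 1.5)        # suppress sensor noise
edges = cv2.Canny(blur, 50, 150)                 # edge map of the bit silhouette
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)     # assume the bit dominates the scene
_, (w, h), _ = cv2.minAreaRect(largest)          # oriented bounding box
print("estimated diameter: %.3f mm" % (min(w, h) * MM_PER_PX))
```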
SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)
Zhang, Xiang; Chen, Zhangwei
2013-01-01
This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency: with a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
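As a reference for what the FPGA pipeline computes, here is a brute-force software sketch of SAD block matching with the paper's 5 × 5 window and 64-pixel disparity range; it is deliberately unoptimized and exists only to spell out the cost function.

```python
import numpy as np

def sad_disparity(left, right, window=5, max_disp=64):
    """Brute-force SAD block matching; the FPGA pipelines this per pixel."""
    h, w = left.shape
    r = window // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y-r:y+r+1, x-r:x+r+1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y-r:y+r+1, x-d-r:x-d+r+1].astype(np.int32)
                cost = np.abs(patch - cand).sum()   # sum of absolute differences
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d                     # winning disparity for this pixel
    return disp
```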
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-30
... Automation, Inc. (``Amistar'') of San Marcos, California; Techno Soft Systemnics, Inc. (``Techno Soft'') of... the claim terms ``test,'' ``match score surface,'' and ``gradient direction,'' all of his infringement... complainants' proposed construction for the claim terms ``test,'' ``match score surface,'' and ``gradient...
Aki, Esra; Atasavun, Songül; Kayihan, Holya
2008-06-01
Kinesthetic sense plays an important role in writing. Children with low vision lack sensory input from the environment given their loss of vision. This study assessed the effect of upper extremity kinesthetic sense on writing function in two groups, one of students with low vision (9 girls and 11 boys, 9.4 +/- 1.9 yr. of age) and one of sighted students (10 girls and 10 boys, 10.1 +/- 1.3 yr. of age). All participants were given the Kinesthesia Test and the Jebsen Hand Function Test-Writing subtest. Students with low vision scored lower on kinesthetic perception and writing performance than their sighted peers. The correlation between scores for writing performance and upper extremity kinesthetic sense in the two groups was significant (r = -.34). The likelihood of deficits in kinesthetic information in students with low vision should be kept in mind.
The precision measurement and assembly for miniature parts based on double machine vision systems
NASA Astrophysics Data System (ADS)
Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.
2015-02-01
In the assembly of miniature parts, structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrated with a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment was therefore developed that integrates two machine vision systems. A horizontal vision system measures the position of feature structures in the parts' side view, which cannot be seen by the vertical camera; the position measured by the horizontal camera is converted into the vertical vision system's frame using calibration information. With careful calibration, part alignment and positioning in the assembly process can be guaranteed. The developed assembly equipment is easy to implement, modular, and cost-effective. The handling of the miniature parts and the assembly procedure are briefly introduced, the calibration procedure is given, and the assembly error is analyzed for compensation.
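Converting a position from the horizontal camera's frame into the vertical one amounts to applying a calibrated rigid transform; a minimal sketch follows, with illustrative (not measured) rotation and translation values standing in for real calibration data.

```python
import numpy as np

# R, t: rotation and translation from the horizontal camera frame to the
# vertical camera frame; in practice both come from an offline calibration
# with a known target. The values below are purely illustrative.
R = np.eye(3)
t = np.array([0.0, 120.0, -35.0])  # mm

def to_vertical_frame(p_horizontal):
    """Map a 3-D point measured by the side camera into the top camera's frame."""
    return R @ np.asarray(p_horizontal, dtype=float) + t

print(to_vertical_frame([1.2, 0.4, 10.0]))
```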
Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E
2017-08-04
Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problems do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning. This can be done by using a pretrained convolutional neural network (CNN) developed for machine vision purposes for exercise classification effort. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), was collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature-extraction and classification using random forest classifiers. With the collected dataset and the proposed method, the different exercises could be recognized with a 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises is sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. ©Jose Juan Dominguez Veiga, Martin O'Reilly, Darragh Whelan, Brian Caulfield, Tomas E Ward. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 04.08.2017.
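The plot-to-image step described above can be sketched as follows; the figure size, single-axes plot style, and folder-per-label layout are assumptions chosen to suit a typical CNN retraining script, not the authors' exact rendering.

```python
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend; we only write image files
import matplotlib.pyplot as plt

def signal_to_image(window, label, idx, out_dir="plots"):
    """Render one IMU window (n_samples x 6, accel + gyro) to a labeled image file."""
    path = os.path.join(out_dir, label)
    os.makedirs(path, exist_ok=True)   # folder-per-label layout for retraining
    fig, ax = plt.subplots(figsize=(3, 3), dpi=100)
    ax.plot(window)                    # all six channels on one axis
    ax.axis("off")                     # the CNN should see waveforms, not axes
    fig.savefig(os.path.join(path, f"{idx}.png"), bbox_inches="tight")
    plt.close(fig)

# Demo with a synthetic 5 s window at 50 Hz; real data would come from the IMU.
signal_to_image(np.random.randn(250, 6), "squat", 0)
```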
NASA Technical Reports Server (NTRS)
1972-01-01
A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.
Robot path planning using expert systems and machine vision
NASA Astrophysics Data System (ADS)
Malone, Denis E.; Friedrich, Werner E.
1992-02-01
This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret this with a knowledge based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.
Vision-based algorithms for near-host object detection and multilane sensing
NASA Astrophysics Data System (ADS)
Kenue, Surender K.
1995-01-01
Vision-based sensing can be used for lane sensing, adaptive cruise control, collision warning, and driver performance monitoring functions of intelligent vehicles. Current computer vision algorithms are not robust for handling multiple vehicles in highway scenarios. Several new algorithms are proposed for multi-lane sensing, near-host object detection, vehicle cut-in situations, and specifying regions of interest for object tracking. These algorithms were tested successfully on more than 6000 images taken from real-highway scenes under different daytime lighting conditions.
Automatic Dissection Of Plantlets
NASA Astrophysics Data System (ADS)
Batchelor, B. G.; Harris, I. P.; Marchant, J. A.; Tillett, R. D.
1989-03-01
Micropropagation is a technique used in horticulture for generating a monoclonal colony of plants. A tiny plantlet is cut into several parts, each of which is then replanted. At the moment, the cutting is performed manually. Automating this task would have significant economic benefits. A robot designed to dissect plants would need to be equipped with intelligent visual sensing. This article is concerned with the image acquisition and processing techniques which such a machine might use. A program, which can calculate where to cut a plant with an "open" structure, is presented. This is expressed in the ProVision language, which is described in another article presented at this conference. (Article 1002-65)
Lumber Scanning System for Surface Defect Detection
D. Earl Kline; Y. Jason Hou; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman
1992-01-01
This paper describes research aimed at developing a machine vision technology to drive automated processes in the hardwood forest products manufacturing industry. An industrial-scale machine vision system has been designed to scan variable-size hardwood lumber for detecting important features that influence the grade and value of lumber such as knots, holes, wane,...
Liquid lens: advances in adaptive optics
NASA Astrophysics Data System (ADS)
Casey, Shawn Patrick
2010-12-01
'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.
Ho, Chao-Ching; Wu, Dung-Sheng
2018-03-22
Spark-assisted chemical engraving (SACE) is a non-traditional machining technology that is used to machine electrically non-conducting materials including glass, ceramics, and quartz. Processing accuracy, machining efficiency, and reproducibility are the key factors in the SACE process. In the present study, a machine vision method is applied to monitor and estimate the status of a SACE-drilled hole in quartz glass. During machining, the spring-fed tool electrode was pre-pressured on the quartz glass surface to keep the electrode in contact with the machining surface. In situ image acquisition and analysis of the SACE drilling process were used to analyze the captured images of the state of the spark discharge at the tip and sidewall of the electrode. The results indicated an association between the accumulative size of the SACE-induced spark area and the depth of the hole: the estimated depths of the SACE-machined holes were a proportional function of the accumulative spark size, with a high degree of correlation. The study proposes an innovative computer vision-based method to estimate the depth and status of SACE-drilled holes in real time.
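The reported proportionality between accumulative spark size and hole depth reduces to a least-squares line fit; the sketch below uses invented numbers in place of the measured spark areas and depths purely to show the computation.

```python
import numpy as np

# Hypothetical per-frame spark areas (px) from the in-situ images and hole
# depths (um) measured offline; real values would come from the experiments.
spark_area = np.cumsum([120, 95, 130, 110, 105, 90])   # accumulative spark size
depth = np.array([14.0, 26.0, 41.0, 53.0, 66.0, 75.0])

slope, intercept = np.polyfit(spark_area, depth, 1)    # least-squares line
r = np.corrcoef(spark_area, depth)[0, 1]               # degree of correlation
print(f"depth ~ {slope:.3f} * area + {intercept:.1f}, r = {r:.3f}")
```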
NASA Astrophysics Data System (ADS)
Lizotte, Todd E.; Ohar, Orest
2004-02-01
Illuminators used in machine vision applications typically produce non-uniform illumination on the targeted surface being observed, causing a variety of problems with machine vision alignment or measurement. In most circumstances the light source is broad spectrum, leading to further problems with image quality when viewed through a CCD camera. Configured with a simple light bulb and a mirrored reflector and/or frosted glass plates, these general illuminators are appropriate only for macro applications. Over the last five years, newer illuminators have hit the market, including circular or rectangular arrays of high-intensity light-emitting diodes. These diode arrays are used to create monochromatic flood illumination of the surface to be inspected. The problem with these illumination techniques is that most of the light does not illuminate the desired areas but spreads broadly across the surface or, when integrated with diffuser elements, tends to create shadowing effects similar to those of broad-spectrum light sources. In many cases a user will try to increase the performance of these illuminators by ganging several assemblies together, increasing the intensity, or moving the illumination source closer to or farther from the surface being inspected. These non-uniform techniques can lead to machine vision errors, where the computer vision system may read false information, such as interpreting non-uniform lighting or shadowing effects as defects. This paper covers a technique involving the use of holographic/diffractive hybrid optical elements that are integrated into standard and customized light sources used in the machine vision industry. The bulk of the paper describes the function and fabrication of the holographic/diffractive optics and how they can be tailored to improve illuminator design. A specific design is then presented, and examples of it in operation are disclosed.
Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W
2004-09-01
Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at x40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (the non-neoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 x 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue, while analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system integrates these classification rules and generates digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole-slide images via scanning technology, and the machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
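The texture side of this analysis can be sketched with a gray-level co-occurrence matrix; the example below uses scikit-image (a recent version, which exposes `graycomatrix`/`graycoprops`), and because those helpers do not provide Haralick feature 4 (sum-of-squares variance) directly, the GLCM correlation property stands in for it here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def tile_features(gray, tile=100):
    """Texture statistic for each 100 x 100 subregion of a uint8 grayscale image."""
    feats = []
    for y in range(0, gray.shape[0] - tile + 1, tile):
        for x in range(0, gray.shape[1] - tile + 1, tile):
            sub = gray[y:y + tile, x:x + tile]
            glcm = graycomatrix(sub, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            # Correlation stands in for Haralick feature 4 in this sketch.
            feats.append((y, x, graycoprops(glcm, "correlation")[0, 0]))
    return feats

demo = (np.random.rand(300, 300) * 255).astype(np.uint8)  # synthetic scene
print(len(tile_features(demo)), "subregions scored")
```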
NASA Technical Reports Server (NTRS)
1999-01-01
Amherst Systems manufactures foveal machine vision technology and systems commercially available to end users and system integrators. This technology was initially developed under NASA contracts NAS9-19335 (Johnson Space Center) and NAS1-20841 (Langley Research Center). The technology is currently being delivered to university research facilities and military sites. More information may be found at www.amherst.com.
Machine vision for various manipulation tasks
NASA Astrophysics Data System (ADS)
Domae, Yukiyasu
2017-03-01
Bin picking, re-grasping, pick-and-place, kitting: many such manipulation tasks arise in the automation of factories, warehouses, and similar facilities. The main problem in automating them is that the target objects (items/parts) have various shapes, weights, and surface materials. In my talk, I will present the latest machine vision systems and algorithms that address this problem.
A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection
D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin
1993-01-01
A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...
Machine Vision Technology for the Forest Products Industry
Richard W. Conners; D.Earl Kline; Philip A. Araman; Thomas T. Drayer
1997-01-01
From forest to finished product, wood is moved from one processing stage to the next, subject to the decisions of individuals along the way. While this process has worked for hundreds of years, the technology exists today to provide more complete information to the decision makers. Virginia Tech has developed this technology, creating a machine vision prototype for...
Pelletier, Mathew G
2008-02-08
One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphic processing units (GPU) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned, which in turn causes lint fiber damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a trash sensor tightly coupled to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphic processing unit (GPU), for processing of the cotton trash images, a speed-up of over 6.5 times over optimized code running on the PC's central processing unit (CPU) was gained. The new parallel algorithm operating on the GPU was able to process a 1024 x 1024 image in less than 17 ms. At this improved speed, the image processing system's performance should now be sufficient to provide a system capable of real-time feedback control in tight cooperation with the cleaning equipment.
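The abstract does not include the algorithm itself, so the following is only a hedged sketch of the general idea of data-parallel, per-pixel trash segmentation on a GPU, with CuPy standing in for the paper's implementation; the dark-pixel rule and the threshold value are illustrative assumptions.

```python
# Hedged sketch: per-pixel trash segmentation as a data-parallel GPU job.
# CuPy stands in for the paper's implementation; the "trash is darker
# than lint" rule and the threshold of 60 are illustrative assumptions.
import cupy as cp

def trash_fraction(image_u8, threshold=60):
    """Return the fraction of pixels darker than `threshold`."""
    img = cp.asarray(image_u8)   # host -> device transfer
    mask = img < threshold       # per-pixel test, executed in parallel
    return float(mask.mean())    # reduction runs on the GPU
```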
A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
1997-01-01
A low-power high-speed smart sensor system based on a large format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design that is composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in a SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second computing power, a two-orders-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.
The 3D model control of image processing
NASA Technical Reports Server (NTRS)
Nguyen, An H.; Stark, Lawrence
1989-01-01
Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.
Receptive fields and the theory of discriminant operators
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Hungenahally, Suresh K.
1991-02-01
A biological basis for machine vision is a notion used extensively in the development of machine vision systems for various applications. In this paper we attempt to emulate the receptive fields that exist in the biological visual channels. In particular, we exploit the notion of receptive fields to develop mathematical functions, named discriminant functions, for the extraction of transition information from signals, multi-dimensional signals, and images. These functions are found to be useful for the development of artificial receptive fields for neuro-vision systems.
Deep Learning for Computer Vision: A Brief Review
Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios
2018-01-01
Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619
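For readers who want the CNN building blocks named above (convolution, pooling, fully connected classifier) made concrete, here is a minimal PyTorch sketch; it is a generic toy network, not any architecture from the review.

```python
# Generic toy CNN in PyTorch, only to make the named building blocks
# (convolution, pooling, fully connected classifier) concrete.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # shape: (1, 10)
```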
Vision Systems with the Human in the Loop
NASA Astrophysics Data System (ADS)
Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard
2005-12-01
The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, and they can adapt to their environment and thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.
Quality Control by Artificial Vision
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, Edmond Y.; Gleason, Shaun Scott; Niel, Kurt S.
2010-01-01
Computational technology has fundamentally changed many aspects of our lives. One clear evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Similar to the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers relevant to fundamental technology improvements to foster quality control by artificial vision, and fine-tuned the technology for specific applications. We aim to balance both theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters. They also show how to use the discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier-basis functions for this task. Image registration is another important task for machine vision. Bingham and Arrowood investigate the implementation and results in applying Fourier phase matching for projection registration, with a particular focus on nondestructive testing using computed tomography. Readers interested in enriching their arsenal of image-processing algorithms for machine-vision tasks should find these papers valuable. Meanwhile, we have four papers dealing with more specific machine-vision tasks. The first one, Yahiaoui et al., is quantitative in nature, using machine vision for real-time passenger counting. Occlusion is a common problem in counting objects and people, and they circumvent this issue with a dense stereovision system, achieving 97 to 99% accuracy in their tests. On the other hand, the second paper by Oswald-Tranta et al. focuses on thermographic crack detection. An infrared camera is used to detect inhomogeneities, which may indicate surface cracks. They describe the various steps in developing fully automated testing equipment aimed at a high throughput. Another paper describing an inspection system is Molleda et al., which handles flatness inspection of rolled products. They employ optical-laser triangulation and 3-D surface reconstruction for this task, showing how these can be achieved in real time. Last but not least, Presles et al. propose a way to monitor the particle-size distribution of batch crystallization processes. This is achieved through a new in situ imaging probe and image-analysis methods. While it is unlikely any reader may be working on these four specific problems at the same time, we are confident that readers will find these papers inspiring and potentially helpful to their own machine-vision system developments.
Reflections on the Development of a Machine Vision Technology for the Forest Products Industry
Richard W. Conners; D.Earl Kline; Philip A. Araman; Robert L. Brisbin
1992-01-01
The authors have approximately 25 years experience in developing machine vision technology for the forest products industry. Based on this experience this paper will attempt to realistically predict what the future holds for this technology. In particular, this paper will attempt to describe some of the benefits this technology will offer, describe how the technology...
USDA-ARS?s Scientific Manuscript database
The objective of this research was to develop an in-field apple presorting and grading system to separate undersized and defective fruit from fresh market-grade apples. To achieve this goal, a cost-effective machine vision inspection prototype was built, which consisted of a low-cost color camera, L...
Color machine vision in industrial process control: case limestone mine
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.; Lemstrom, Guy F.; Koskinen, Seppo
1994-11-01
An optical sorter technology has been developed to improve the profitability of a mine by using color line-scan machine vision technology. The newly adopted technology lengthens the expected lifetime of the limestone mine and improves its efficiency. The project has also proved that today's color line-scan technology can successfully be applied to industrial use in harsh environments.
LED light design method for high contrast and uniform illumination imaging in machine vision.
Wu, Xiaojun; Gao, Guangming
2018-03-01
In machine vision, illumination is very critical in determining the complexity of the inspection algorithms. Proper lighting can yield clear and sharp images with the highest contrast and low noise between the object of interest and the background, which is conducive to the target being located, measured, or inspected. In contrast to the empirically based trial-and-error convention of selecting an off-the-shelf LED light in machine vision, an optimization algorithm for LED light design is proposed in this paper. It is composed of a contrast optimization model and a uniform illumination technique for non-normal incidence (UINI). The contrast optimization model is built from the surface reflection characteristics, e.g., the roughness, the refractive index, and the light direction, to maximize the contrast between the features of interest and the background. The UINI preserves the uniformity of the lighting optimized by the contrast model. The simulation and experimental results demonstrate that the optimization algorithm is effective and suitable for producing images with the highest contrast and uniformity, which should inform the design of LED illumination systems in machine vision.
Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffith, Douglas; Greitzer, Frank L.
In his 1960 paper "Man-Computer Symbiosis", Licklider predicted that human brains and computing machines would be coupled in a tight partnership that would think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today. Today we are on the threshold of resurrecting the vision of symbiosis. While Licklider's original vision suggested a co-equal relationship, here we discuss an updated vision, neo-symbiosis, in which the human holds a superordinate position in an intelligent human-computer collaborative environment. This paper was originally published as a journal article and is being published as a chapter in an upcoming book series, Advances in Novel Approaches in Cognitive Informatics and Natural Intelligence.
Manifold learning in machine vision and robotics
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-02-01
Smart algorithms are used in Machine vision and Robotics to organize or extract high-level information from the available data. Nowadays, Machine learning is an essential and ubiquitous tool to automate the extraction of patterns or regularities from data (images in Machine vision; camera, laser, and sonar sensor data in Robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high dimensional "observation space" with smaller intrinsic dimensionality. A generally accepted model of such data is the manifold model, in accordance with which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high dimensional observation space; real-world high-dimensional data obtained from "natural" sources, as a rule, meet this model. The use of Manifold learning techniques in Machine vision and Robotics, which discover a low-dimensional structure in high dimensional data and result in effective algorithms for solving a large number of various subject-oriented tasks, is the subject of the conference plenary speech, some topics of which are presented in this paper.
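As a small, hedged illustration of the manifold model described above, the following scikit-learn sketch recovers a two-dimensional embedding of points sampled from a surface embedded in three dimensions; it is a generic example, not the method from the plenary speech.

```python
# Hedged illustration of the manifold model: data living in a 3-D ambient
# space but lying on a 2-D surface, embedded with Isomap (scikit-learn).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1000, noise=0.05)   # ambient dim = 3
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (1000, 2): the intrinsic 2-D coordinates
```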
NASA Astrophysics Data System (ADS)
Mottershead, John E.
2015-05-01
MSSP is our journal. It developed out of the research community and in that sense is owned by us, its readers and authors. It was started by a small group, all international leaders in System Identification, Measurement and Signal Processing, Modal Analysis, Machine and Structural Diagnostics etc., several of whom still provide invaluable advice and guidance through their work on the Editorial Board. Most importantly, Simon's leadership for almost three decades has been inspirational, dedicated and energetic. So, it is a great honour for me to have been invited to assume the editorial leadership of MSSP and continue the work of serving a new generation of researchers in the broad and evolving field of Mechanical Systems and Signal Processing.
Machine Detection of Enhanced Electromechanical Energy Conversion in PbZr0.2Ti0.8O3 Thin Films
Agar, Joshua C.; Cao, Ye; Naul, Brett; ...
2018-05-28
Many energy conversion, sensing, and microelectronic applications based on ferroic materials are determined by the domain structure evolution under applied stimuli. New hyperspectral, multidimensional spectroscopic techniques now probe dynamic responses at relevant length and time scales to provide an understanding of how these nanoscale domain structures impact macroscopic properties. Such approaches, however, remain limited in use because of the difficulties that exist in extracting and visualizing scientific insights from these complex datasets. Using multidimensional band-excitation scanning probe spectroscopy and adapting tools from both computer vision and machine learning, an automated workflow is developed to featurize, detect, and classify signatures of ferroelectric/ferroelastic switching processes in complex ferroelectric domain structures. This approach enables the identification and nanoscale visualization of varied modes of response and a pathway to statistically meaningful quantification of the differences between those modes. Lastly, among other things, the importance of domain geometry is spatially visualized for enhancing nanoscale electromechanical energy conversion.
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems
D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman
1998-01-01
The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...
Overview of machine vision methods in x-ray imaging and microtomography
NASA Astrophysics Data System (ADS)
Buzmakov, Alexey; Zolotov, Denis; Chukalina, Marina; Nikolaev, Dmitry; Gladkov, Andrey; Ingacheva, Anastasia; Yakimchuk, Ivan; Asadchikov, Victor
2018-04-01
Digital X-ray imaging has become widely used in science, medicine, and non-destructive testing. This allows modern digital image analysis to be used for automatic information extraction and interpretation. We give a short review of machine vision applications in scientific X-ray imaging and microtomography, including image processing, feature detection and extraction, image compression to increase camera throughput, microtomographic reconstruction, visualization, and setup adjustment.
Vision-aided Monitoring and Control of Thermal Spray, Spray Forming, and Welding Processes
NASA Technical Reports Server (NTRS)
Agapakis, John E.; Bolstad, Jon
1993-01-01
Vision is one of the most powerful forms of non-contact sensing for monitoring and control of manufacturing processes. However, processes involving an arc plasma or flame such as welding or thermal spraying pose particularly challenging problems to conventional vision sensing and processing techniques. The arc or plasma is not typically limited to a single spectral region and thus cannot be easily filtered out optically. This paper presents an innovative vision sensing system that uses intense stroboscopic illumination to overpower the arc light and produce a video image that is free of arc light or glare and dedicated image processing and analysis schemes that can enhance the video images or extract features of interest and produce quantitative process measures which can be used for process monitoring and control. Results of two SBIR programs sponsored by NASA and DOE and focusing on the application of this innovative vision sensing and processing technology to thermal spraying and welding process monitoring and control are discussed.
Software model of a machine vision system based on the common house fly.
Madsen, Robert; Barrett, Steven; Wilcox, Michael
2005-01-01
The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and contain seven photoreceptors, each with a Gaussian profile. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.
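A hedged sketch of the cartridge idea follows: seven Gaussian-profile photoreceptors sampling an image around a center point. The hexagonal layout, spacing, and sigma are illustrative assumptions, not the authors' parameters.

```python
# Hedged sketch of one "cartridge": seven Gaussian-profile photoreceptors
# sampling around a center point. Layout, spacing, and sigma are
# illustrative assumptions, not the authors' parameters.
import numpy as np
from scipy.ndimage import gaussian_filter

def cartridge_response(image, center, spacing=4.0, sigma=2.0):
    """Return seven photoreceptor outputs (center must be off the border)."""
    blurred = gaussian_filter(image.astype(float), sigma)  # Gaussian profile
    cy, cx = center
    offsets = [(0.0, 0.0)] + [(spacing * np.sin(a), spacing * np.cos(a))
                              for a in np.linspace(0, 2 * np.pi, 6,
                                                   endpoint=False)]
    return [blurred[int(round(cy + dy)), int(round(cx + dx))]
            for dy, dx in offsets]
```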
Component Pin Recognition Using Algorithms Based on Machine Learning
NASA Astrophysics Data System (ADS)
Xiao, Yang; Hu, Hong; Liu, Ze; Xu, Jiangchang
2018-04-01
The purpose of machine vision for a plug-in machine is to improve the machine's stability and accuracy, and recognition of the component pin is an important part of the vision. This paper focuses on component pin recognition using three different techniques. The first technique involves traditional image processing using the core algorithm for binary large object (BLOB) analysis. The second technique uses the histogram of oriented gradients (HOG) to experimentally compare the support vector machine (SVM) and the adaptive boosting (AdaBoost) meta-algorithm as classifiers. The third technique is a deep learning method known as a convolutional neural network (CNN), which identifies the pin by comparing a sample against its training data. The main purpose of the research presented in this paper is to increase the knowledge of learning methods used in the plug-in machine industry in order to achieve better results.
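To make the second technique concrete, here is a hedged scikit-image/scikit-learn sketch of HOG descriptors feeding an SVM and an AdaBoost classifier; the random arrays are placeholders for real grayscale pin-region crops and their labels.

```python
# Hedged sketch of technique two: HOG descriptors classified by an SVM
# and, for comparison, AdaBoost. Placeholder data stands in for real
# pin-region crops from the plug-in machine camera.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.ensemble import AdaBoostClassifier

def describe(images):
    return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in images])

X_train = [np.random.rand(64, 64) for _ in range(20)]  # placeholder crops
y_train = np.random.randint(0, 2, 20)                  # placeholder labels

svm = LinearSVC().fit(describe(X_train), y_train)
ada = AdaBoostClassifier(n_estimators=50).fit(describe(X_train), y_train)
```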
Machine vision based quality inspection of flat glass products
NASA Astrophysics Data System (ADS)
Zauner, G.; Schagerl, M.
2014-03-01
This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little `image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (standard deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu-moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. In this way, the following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross validation for evaluation purposes.
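The classifier comparison described above could be reproduced in outline as follows; this hedged scikit-learn sketch uses stand-ins for the named algorithms (J48 ≈ decision tree; JRip has no direct scikit-learn equivalent and is omitted) and placeholder features.

```python
# Hedged outline of the classifier comparison with scikit-learn stand-ins.
# X and y are placeholders for the extracted defect features and the
# five defect classes.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(200, 12)       # histogram/geometry/texture features
y = np.random.randint(0, 5, 200)  # five defect classes

models = {
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(),
    "naive Bayes": GaussianNB(),
    "SVM (multi-class)": SVC(),
    "MLP": MLPClassifier(max_iter=500),
    "k-NN": KNeighborsClassifier(),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```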
NASA Astrophysics Data System (ADS)
Li, C.; Li, F.; Liu, Y.; Li, X.; Liu, P.; Xiao, B.
2012-07-01
Building 3D reconstruction based on ground remote sensing data (image, video, and lidar) inevitably faces the problem that buildings are often occluded by vegetation, so automatically removing and repairing vegetation occlusion is a very important preprocessing step for image understanding, computer vision, and digital photogrammetry. In traditional multispectral remote sensing from aeronautic and space platforms, the Red and Near-infrared (NIR) bands, e.g., through the NDVI (Normalized Difference Vegetation Index), are useful for distinguishing vegetation and clouds, amongst other targets. However, especially on ground platforms, CIR (Color Infra Red) is little utilized by computer vision and digital photogrammetry, which usually only take true-color RGB into account. Whether CIR is necessary for vegetation segmentation therefore matters, since most close-range cameras do not contain such an NIR band. Moreover, the CIE L*a*b color space, obtained by transformation from RGB, has received little attention from photogrammetrists despite its power in image classification and analysis. Therefore, CIE (L, a, b) features and a support vector machine (SVM) are suggested for vegetation segmentation as a substitute for CIR. Finally, experimental results on visual effect and automation are given. The conclusion is that it is feasible to remove and segment vegetation occlusion without the NIR band. This work should pave the way for texture reconstruction and repair in future 3D reconstruction.
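A minimal sketch of the suggested approach, assuming hand-labeled vegetation masks for training: per-pixel CIE L*a*b* features classified with an SVM (scikit-image / scikit-learn). The random image and mask below are placeholders.

```python
# Minimal sketch: per-pixel CIE L*a*b* features classified with an SVM.
# The random image and mask are placeholders for real training data.
import numpy as np
from skimage.color import rgb2lab
from sklearn.svm import SVC

def lab_features(rgb_image):
    return rgb2lab(rgb_image).reshape(-1, 3)  # one (L, a, b) row per pixel

train_img = np.random.rand(32, 32, 3)                    # float RGB in [0, 1]
train_mask = (np.random.rand(32, 32) > 0.5).astype(int)  # 1 = vegetation

clf = SVC(kernel="rbf").fit(lab_features(train_img), train_mask.ravel())
veg_mask = clf.predict(lab_features(train_img)).reshape(32, 32)
```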
Research into the Architecture of CAD Based Robot Vision Systems
1988-02-09
Vision and "Automatic Generation of Recognition Features for Com- puter Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica , vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge
NASA Astrophysics Data System (ADS)
Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil
2018-04-01
Robotic agriculture requires smart and practical techniques to substitute machine intelligence for human intelligence. Strawberry is one of the important Mediterranean products, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should also be capable of vision-based disease identification. The objective of this paper is to practically verify the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves which does not require neural networks or time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since the type and degree of infection are approximated the way a human brain would, a fuzzy decision maker classifies the leaves over images captured on-site, having the same properties as human vision. Optimizing the fuzzy parameters for a typical strawberry production area at a summer mid-day in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for segmentation, using a typical human instant-classification approximation as the benchmark, thus holding higher accuracy than a human eye identifier. The fuzzy-based classifier provides an approximate result for deciding whether a leaf is healthy or not.
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1990-01-01
The visual perception of form information is considered to be based on the functioning of simple and complex neurons in the primate striate cortex. However, a review of the physiological data on these brain cells cannot be harmonized with either the perceptual spatial frequency performance of primates or the performance which is necessary for form perception in humans. This discrepancy together with recent interest in cortical-like and perceptual-like processing in image coding and machine vision prompted a series of image processing experiments intended to provide some definition of the selection of image operators. The experiments were aimed at determining operators which could be used to detect edges in a computational manner consistent with the visual perception of structure in images. Fundamental issues were the selection of size (peak spatial frequency) and circular versus oriented operators (or some combination). In a previous study, circular difference-of-Gaussian (DOG) operators, with peak spatial frequency responses at about 11 and 33 cyc/deg were found to capture the primary structural information in images. Here larger scale circular DOG operators were explored and led to severe loss of image structure and introduced spatial dislocations (due to blur) in structure which is not consistent with visual perception. Orientation sensitive operators (akin to one class of simple cortical neurons) introduced ambiguities of edge extent regardless of the scale of the operator. For machine vision schemes which are functionally similar to natural vision form perception, two circularly symmetric very high spatial frequency channels appear to be necessary and sufficient for a wide range of natural images. Such a machine vision scheme is most similar to the physiological performance of the primate lateral geniculate nucleus rather than the striate cortex.
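A hedged sketch of a circular difference-of-Gaussians (DOG) operator of the kind discussed above follows; the sigma values and the 1.6 center/surround ratio are illustrative assumptions, not the study's tuned parameters.

```python
# Hedged sketch of a circular difference-of-Gaussians (DOG) operator:
# a narrow Gaussian minus a wide one gives a center-surround response
# whose zero crossings mark candidate edges.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_center, ratio=1.6):
    img = image.astype(float)
    return (gaussian_filter(img, sigma_center)
            - gaussian_filter(img, sigma_center * ratio))

frame = np.random.rand(128, 128)                # placeholder image
fine = dog_response(frame, sigma_center=0.8)    # high-frequency channel
coarse = dog_response(frame, sigma_center=2.4)  # lower-frequency channel
```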
49 CFR 214.507 - Required safety equipment for new on-track roadway maintenance machines.
Code of Federal Regulations, 2011 CFR
2011-10-01
... glass, or other material with similar properties, if the machine is designed with a windshield. Each new... wipers or suitable alternatives that provide the machine operator an equivalent level of vision if...
Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.
Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique
Individual items of any agricultural commodity are different from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.
Machine vision system for measuring conifer seedling morphology
NASA Astrophysics Data System (ADS)
Rigney, Michael P.; Kranzler, Glenn A.
1995-01-01
A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line-scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
Can Humans Fly? Action Understanding with Multiple Classes of Actors
2015-06-08
[Reference-list fragment; recoverable citations: "... recognition using structure from motion point clouds," European Conference on Computer Vision, 2008; R. Caruana, "Multitask Learning," Machine Learning; "... autonomous driving? The KITTI vision benchmark suite," IEEE Conference on Computer Vision and Pattern Recognition, 2012; L. Gorelick, M. Blank, ...]
Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffith, Douglas; Greitzer, Frank L.
We re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half-century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the four decades following Licklider’s challenge, despite significant advancements in the fields of human factors and computer science. Licklider’s vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider’s vision a reality. This paper provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.
Reinforcement learning in computer vision
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Burnaev, E. V.
2018-04-01
Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, various complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes the reinforcement learning technology and its use for solving computer vision problems.
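To make the learning-by-interaction loop concrete, here is a minimal tabular Q-learning sketch; the toy chain environment is an assumption standing in for a vision-derived state signal.

```python
# Minimal tabular Q-learning sketch; the toy chain environment is an
# assumption standing in for a vision-derived state signal.
import numpy as np

n_states, n_actions = 5, 2  # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for _ in range(2000):
    s = 0
    while s != n_states - 1:  # rightmost state is the goal
        a = int(rng.integers(n_actions)) if rng.random() < eps \
            else int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Temporal-difference update toward the one-step lookahead target.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
```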
Foreword to the theme issue on geospatial computer vision
NASA Astrophysics Data System (ADS)
Wegner, Jan Dirk; Tuia, Devis; Yang, Michael; Mallet, Clement
2018-06-01
Geospatial Computer Vision has become one of the most prevalent emerging fields of investigation in Earth Observation in the last few years. In this theme issue, we aim at showcasing a number of works at the interface between remote sensing, photogrammetry, image processing, computer vision and machine learning. In light of recent sensor developments - both from the ground and from above - an unprecedented (and ever growing) quantity of geospatial data is available for tackling challenging and urgent tasks such as environmental monitoring (deforestation, carbon sequestration, climate change mitigation), disaster management, autonomous driving or the monitoring of conflicts. The new bottleneck for serving these applications is the extraction of relevant information from such large amounts of multimodal data. This includes sources stemming from multiple sensors that exhibit distinct physical natures, heterogeneous quality, and varying spatial, spectral and temporal resolutions. They are as diverse as multi-/hyperspectral satellite sensors, color cameras on drones, laser scanning devices, existing open land-cover geodatabases and social media. Such core data processing is mandatory so as to generate semantic land-cover maps, accurate detection and trajectories of objects of interest, as well as by-products of superior added value: georeferenced data, images with enhanced geometric and radiometric qualities, or Digital Surface and Elevation Models.
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1992-01-01
The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
Bi Sparsity Pursuit: A Paradigm for Robust Subspace Recovery
2016-09-27
16. SECURITY CLASSIFICATION OF: The success of sparse models in computer vision and machine learning is due to the fact that, high dimensional data...Office P.O. Box 12211 Research Triangle Park, NC 27709-2211 Signal recovery, Sparse learning , Subspace modeling REPORT DOCUMENTATION PAGE 11...vision and machine learning is due to the fact that, high dimensional data is distributed in a union of low dimensional subspaces in many real-world
Balaban, M O; Aparicio, J; Zotarelli, M; Sims, C
2008-11-01
The average colors of mangos and apples were measured using machine vision. A method to quantify the perception of nonhomogeneous colors by sensory panelists was developed. Untrained panelists selected three colors out of several reference colors and estimated their perceived percentage of the total sample area. Differences between the average colors perceived by panelists and those from the machine vision system were reported as DeltaE values (color difference error). The effects on DeltaE of color nonhomogeneity, and of using real samples or their images in the sensory panels, were evaluated. In general, samples with more nonuniform colors had higher DeltaE values, suggesting that panelists had more difficulty evaluating more nonhomogeneous colors. There was no significant difference in DeltaE values between the real fruits and their screen images; therefore, images can be used to evaluate color instead of the real samples.
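A hedged sketch of the color-difference computation described above: average a region's color in CIE L*a*b* and report a DeltaE error against a reference color. The paper does not state which DeltaE formula was used, so skimage's CIEDE2000 implementation is an assumption here.

```python
# Hedged sketch: average a region's color in CIE L*a*b* and report a
# DeltaE error against a reference. CIEDE2000 is an assumed variant.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_lab(rgb_image):
    return rgb2lab(rgb_image).reshape(-1, 3).mean(axis=0)

machine_lab = mean_lab(np.random.rand(64, 64, 3))  # machine vision estimate
panel_lab = np.array([55.0, 10.0, 30.0])           # panelist-derived color
dE = float(deltaE_ciede2000(machine_lab[None, :], panel_lab[None, :])[0])
```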
Machine vision methods for use in grain variety discrimination and quality analysis
NASA Astrophysics Data System (ADS)
Winter, Philip W.; Sokhansanj, Shahab; Wood, Hugh C.
1996-12-01
Decreasing cost of computer technology has made it feasible to incorporate machine vision technology into the agriculture industry. The biggest attraction to using a machine vision system is the computer's ability to be completely consistent and objective. One use is in the variety discrimination and quality inspection of grains. Algorithms have been developed using Fourier descriptors and neural networks for use in variety discrimination of barley seeds. RGB and morphology features have been used in the quality analysis of lentils, and probability distribution functions and L,a,b color values for borage dockage testing. These methods have been shown to be very accurate and have a high potential for agriculture. This paper presents the techniques used and results obtained from projects including: a lentil quality discriminator, a barley variety classifier, a borage dockage tester, a popcorn quality analyzer, and a pistachio nut grading system.
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In the conventional tool positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and an image processing technique for measuring the motion of a lathe tool from two-dimensional sequential images, captured using a charge-coupled device (CCD) camera with a resolution of 250 microns, are described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to the machine vision system, calibration, environmental factors, etc., in lathe tool movement was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
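Since the abstract names PSO without detailing it, here is a minimal, generic particle swarm optimization sketch; the quadratic objective is a stand-in assumption for the paper's vision-error function.

```python
# Generic particle swarm optimization sketch; the quadratic objective is
# a stand-in assumption for the paper's vision-error function.
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(1)
    x = rng.uniform(*bounds, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        val = np.apply_along_axis(objective, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

print(pso(lambda p: ((p - 1.2) ** 2).sum()))  # converges near [1.2, 1.2]
```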
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur
2012-01-01
This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956
Performance evaluation of various classifiers for color prediction of rice paddy plant leaf
NASA Astrophysics Data System (ADS)
Singh, Amandeep; Singh, Maninder Lal
2016-11-01
The food industry is one of the industries that use machine vision for nondestructive quality evaluation of produce. These quality-measuring systems and software are built on various image-processing algorithms which generally use a particular type of classifier. These classifiers play a vital role in making the algorithms intelligent enough to contribute their best to the said quality evaluations by translating human perception into machine vision and hence machine learning. The crop of interest is rice, and the color of this crop indicates the health status of the plant. An enormous number of classifiers are available for color prediction, but choosing the best among them is the focus of this paper. The performance of a total of 60 classifiers has been analyzed from the application point of view, and the results are discussed. The motivation comes from the idea of providing a set of classifiers with excellent performance and implementing them in a single algorithm for the improvement of machine vision learning and, hence, associated applications.
Landforms of the conterminous United States: a digital shaded-relief portrayal
Thelin, Gail P.; Pike, Richard J.
1991-01-01
Our map was made by digital image processing, a technical specialty related to the broader fields of computer graphics and machine vision (Dawson, 1987; Kennie and McLaren, 1988). The technology includes the many spatially based operations first brought together and developed systematically to manipulate Ranger, Mariner, Landsat, and other images that are reassembled from spacecraft telemetry in a raster or scan-line arrangement of square-grid elements (Nathan, 1966; Castleman, 1979; Sheldon, 1987). These computer procedures have been successfully transferred to landform analysis from remote-sensing applications by substituting terrain heights or sea-floor depths for the customary values of electromagnetic radiation obtained from satellites and stored in digital arrays of pixels (Batson and others, 1975).
Vision sensing techniques in aeronautics and astronautics
NASA Technical Reports Server (NTRS)
Hall, E. L.
1988-01-01
The close relationship between sensing and other tasks in orbital space, and the integral role of vision sensing in practical aerospace applications, are illustrated. Typical space mission-vision tasks encompass the docking of space vehicles, the detection of unexpected objects, the diagnosis of spacecraft damage, and the inspection of critical spacecraft components. Attention is presently given to image functions, the 'windowing' of a view, the number of cameras required for inspection tasks, the choice of incoherent or coherent (laser) illumination, three-dimensional-to-two-dimensional model-matching, edge- and region-segmentation techniques, and motion analysis for tracking.
Methods and Research for Multi-Component Cutting Force Sensing Devices and Approaches in Machining
Liang, Qiaokang; Zhang, Dan; Wu, Wanneng; Zou, Kunlin
2016-01-01
Multi-component cutting force sensing systems applied to cutting tools are gradually becoming the most significant monitoring indicator in manufacturing processes. Their signals have been extensively applied to evaluate the machinability of workpiece materials, predict cutter breakage, estimate cutting tool wear, control machine tool chatter, determine stable machining parameters, and improve surface finish. Robust and effective sensing systems with the capability of monitoring the cutting force in machine operations in real time are crucial for realizing the full potential of the cutting capabilities of computer numerically controlled (CNC) tools. The main objective of this paper is to present a brief review of the existing achievements in the field of multi-component cutting force sensing systems in modern manufacturing. PMID:27854322
Software architecture for time-constrained machine vision applications
NASA Astrophysics Data System (ADS)
Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.
2013-01-01
Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
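The routing rule of such a messaging layer can be made concrete in a few lines. This is a minimal sketch of generic topic-based publish/subscribe, assuming nothing about the architecture's real API; the class, topic, and handler names are invented for illustration.

```python
# Minimal sketch of topic-based publish/subscribe (names are illustrative,
# not the architecture's actual API).
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    """Routes messages from publishers to subscribers by topic."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        # Only handlers registered for this topic receive the message.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
bus.subscribe("frames/raw", lambda m: print("processing", m))
bus.subscribe("alarms/jam", lambda m: print("jam detected:", m))
bus.publish("frames/raw", "frame_0001")   # delivered to the frame handler only
```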
Remote sensing of vegetation structure using computer vision
NASA Astrophysics Data System (ADS)
Dandois, Jonathan P.
High-spatial resolution measurements of vegetation structure are needed for improving understanding of ecosystem carbon, water and nutrient dynamics, the response of ecosystems to a changing climate, and for biodiversity mapping and conservation, among many research areas. Our ability to make such measurements has been greatly enhanced by continuing developments in remote sensing technology---allowing researchers the ability to measure numerous forest traits at varying spatial and temporal scales and over large spatial extents with minimal to no field work, which is costly for large spatial areas or logistically difficult in some locations. Despite these advances, there remain several research challenges related to the methods by which three-dimensional (3D) and spectral datasets are joined (remote sensing fusion) and the availability and portability of systems for frequent data collections at small scale sampling locations. Recent advances in the areas of computer vision structure from motion (SFM) and consumer unmanned aerial systems (UAS) offer the potential to address these challenges by enabling repeatable measurements of vegetation structural and spectral traits at the scale of individual trees. However, the potential advances offered by computer vision remote sensing also present unique challenges and questions that need to be addressed before this approach can be used to improve understanding of forest ecosystems. For computer vision remote sensing to be a valuable tool for studying forests, bounding information about the characteristics of the data produced by the system will help researchers understand and interpret results in the context of the forest being studied and of other remote sensing techniques. This research advances understanding of how forest canopy and tree 3D structure and color are accurately measured by a relatively low-cost and portable computer vision personal remote sensing system: 'Ecosynth'. Recommendations are made for optimal conditions under which forest structure measurements should be obtained with UAS-SFM remote sensing. Ultimately remote sensing of vegetation by computer vision offers the potential to provide an 'ecologist's eye view', capturing not only canopy 3D and spectral properties, but also seeing the trees in the forest and the leaves on the trees.
Multi-parameter monitoring of electrical machines using integrated fibre Bragg gratings
NASA Astrophysics Data System (ADS)
Fabian, Matthias; Hind, David; Gerada, Chris; Sun, Tong; Grattan, Kenneth T. V.
2017-04-01
In this paper a sensor system for multi-parameter electrical machine condition monitoring is reported. The proposed FBG-based system allows for the simultaneous monitoring of machine vibration, rotor speed and position, torque, spinning direction, temperature distribution along the stator windings and on the rotor surface as well as the stator wave frequency. This all-optical sensing solution reduces the component count of conventional sensor systems, i.e., all 48 sensing elements are contained within the machine operated by a single sensing interrogation unit. In this work, the sensing system has been successfully integrated into and tested on a permanent magnet motor prototype.
ERIC Educational Resources Information Center
Ch'ien, Evelyn
2011-01-01
This paper describes how a linguistic form, rap, can evolve in tandem with technological advances and manifest human-machine creativity. Rather than assuming that the interplay between machines and technology makes humans robotic or machine-like, the paper explores how the pressure of executing artistic visions using technology can drive…
Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review
Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F.
2016-01-01
In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work. PMID:26959030
Romero, Peggy; Miller, Ted; Garakani, Arman
2009-12-01
Current methods to assess neurodegeneration in dorsal root ganglion cultures as a model for neurodegenerative diseases are imprecise and time-consuming. Here we describe two new methods to quantify neuroprotection in these cultures. The neurite quality index (NQI) builds upon earlier manual methods, incorporating additional morphological events to increase sensitivity for the detection of early degeneration events. Neurosight is a machine vision-based method that recapitulates many of the strengths of NQI while enabling high-throughput screening applications with decreased costs.
Method and system for controlling a permanent magnet machine
Walters, James E.
2003-05-20
A method and system for controlling the start of a permanent magnet machine are provided. The method assigns a parameter value indicative of an estimated initial rotor position of the machine, then energizes the machine with a level of current high enough to start rotor motion in a desired direction, provided the initial rotor position estimate is sufficiently close to the actual rotor position. A sensing action determines whether any incremental changes in rotor position occur in response to the energizing action. If no changes in rotor position are sensed, the estimated rotor position is incrementally adjusted by a first set of angular values until changes in rotor position are sensed. Once changes in rotor position are sensed, a rotor alignment signal is provided as rotor motion continues. The alignment signal aligns the estimated rotor position relative to the actual rotor position, which allows the machine to operate over a wide speed range.
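The search loop in this start-up procedure is easy to express in code. The sketch below is a hypothetical rendering of the described logic only: the energize and rotor_moved functions are dummy stand-ins rather than any real motor-control API, and the 15-degree step is an invented example of the "first set of angular values".

```python
# Hypothetical sketch of the start-up logic: energize at the estimated rotor
# angle, check for incremental rotor motion, and step the estimate by a fixed
# angular increment until motion is sensed.
import itertools

STEP_DEG = 15.0  # illustrative member of the "first set of angular values"

def start_machine(estimate_deg: float, energize, rotor_moved) -> float:
    """Return the energizing angle at which rotor motion was first sensed."""
    for k in itertools.count():
        angle = (estimate_deg + k * STEP_DEG) % 360.0
        energize(angle)                 # current high enough to start motion
        if rotor_moved():               # incremental position change sensed?
            return angle                # alignment/tracking would begin here

# Dummy stand-ins: the rotor "moves" only when energized within 30 deg of its
# actual position (90 deg here); the initial estimate is deliberately off.
actual_pos = 90.0
last_angle = []
def energize(a): last_angle.append(a)
def rotor_moved(): return abs((last_angle[-1] - actual_pos + 180) % 360 - 180) < 30.0

print("motion first sensed at", start_machine(10.0, energize, rotor_moved), "deg")
```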
Marsden, Janet
2016-09-21
Rationale and key points: An objective assessment of the patient's vision is important to assess variation from 'normal' vision in acute and community settings, to establish a baseline before examination and treatment in the emergency department, and to assess any changes during ophthalmic outpatient appointments. » Vision is one of the essential senses that permits people to make sense of the world. » Visual assessment does not only involve measuring central visual acuity, it also involves assessing the consequences of reduced vision. » Assessment of vision in children is crucial to identify issues that might affect vision and visual development, and to optimise lifelong vision. » Untreatable loss of vision is not an inevitable consequence of ageing. » Timely and repeated assessment of vision over life can reduce the incidence of falls, prevent injury and optimise independence. Reflective activity: 'How to' articles can help update your practice and ensure it remains evidence based. Apply this article to your practice. Reflect on and write a short account of: 1. How this article might change your practice when assessing people holistically. 2. How you could use this article to educate your colleagues in the assessment of vision.
Artificial Intelligence/Robotics Applications to Navy Aircraft Maintenance.
1984-06-01
This report surveys robotics technologies and relevant AI technologies for Navy aircraft maintenance, including expert systems, automatic planning, natural language, and machine vision. Robots can operate automatic machinery such as presses, molding machines, and numerically-controlled machine tools, just as people do. Artificial intelligence is concerned with the functions of the brain, whereas robotics includes building machines that imitate human behavior.
Automatic Training of Rat Cyborgs for Navigation
Yu, Yipeng; Wu, Zhaohui; Xu, Kedi; Gong, Yongyue; Zheng, Nenggan; Zheng, Xiaoxiang; Pan, Gang
2016-01-01
A rat cyborg system refers to a biological rat implanted with microelectrodes in its brain, via which outer electrical stimuli can be delivered into the brain in vivo to control its behaviors. Rat cyborgs have various applications in emergencies, such as search and rescue in disasters. Prior to a rat cyborg becoming controllable, a lot of effort is required to train it to adapt to the electrical stimuli. In this paper, we build a vision-based automatic training system for rat cyborgs to replace the time-consuming manual training procedure. A hierarchical framework is proposed to facilitate the co-learning between rats and machines. In the framework, the behavioral states of a rat cyborg are visually sensed by a camera, a parameterized state machine is employed to model the training action transitions triggered by the rat's behavioral states, and an adaptive adjustment policy is developed to adaptively adjust the stimulation intensity. The experimental results of three rat cyborgs prove the effectiveness of our system. To the best of our knowledge, this study is the first to tackle automatic training of animal cyborgs. PMID:27436999
Techniques calm fear of imaging machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Pelt, D.
1990-04-02
Magnetic resonance imaging has become a valuable tool in diagnosing diseases, and the imaging devices are now used as often as 2 million times a year in the United States. But as many as 10 percent of patients advised to undergo the procedure cannot because they become overwhelmed with claustrophobia-like fear triggered by having to lie motionless in the machine's tunnel-like cylinder for about 45 minutes. To counteract this fear, several hospitals now practice various techniques to help reduce the feelings of confinement. One popular method is to give a patient special eyeglasses that allow him to look beyond his feet and see the tunnel opening. Other glasses use mirrors to direct the patient's vision out the back of the unit to large wilderness photographs or murals that simulate a sense of spaciousness. Even a basic item like a set of headphones that plays music can often distract a patient, and technicians frequently hold a patient's hand or foot during the procedure. Another trick is to invite family members and friends to remain with the patient during the scan to provide company and reassurance.
Maidenbaum, Shachar; Abboud, Sami; Amedi, Amir
2014-04-01
Sensory substitution devices (SSDs) have come a long way since first developed for visual rehabilitation. They have produced exciting experimental results, and have furthered our understanding of the human brain. Unfortunately, they are still not used for practical visual rehabilitation, and are currently considered as reserved primarily for experiments in controlled settings. Over the past decade, our understanding of the neural mechanisms behind visual restoration has changed as a result of converging evidence, much of which was gathered with SSDs. This evidence suggests that the brain is more than a pure sensory machine; rather, it is a highly flexible task machine, i.e., brain regions can maintain or regain their function in vision even with input from other senses. This complements a recent set of promising behavioral achievements using SSDs and new technologies and tools. All these changes strongly suggest that the time has come to revive the focus on practical visual rehabilitation with SSDs, and we chart several key steps in this direction such as training protocols and self-training tools. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Development of machine-vision system for gap inspection of muskmelon grafted seedlings.
Liu, Siyao; Xing, Zuochang; Wang, Zifan; Tian, Subo; Jahun, Falalu Rabiu
2017-01-01
Grafting robots have been developed around the world, but some auxiliary work, such as gap inspection of grafted seedlings, still needs to be done by humans. A machine-vision system for gap inspection of grafted muskmelon seedlings was developed in this study. The image acquisition system consists of a CCD camera, a lens, and a front white lighting source. The image of the inspected gap was processed and analyzed by the software HALCON 12.0. The recognition algorithm of the system is based on the principle of deformable template matching. A template is first created from an image of a qualified grafted seedling gap. The gap image of a grafted seedling is then compared with the created template to determine their matching degree, which ranges from 0 to 1 depending on the similarity between the gap image and the template: the less similar the grafted seedling gap is to the template, the smaller the matching degree. Finally, the gap is output as qualified or unqualified: if the matching degree is less than 0.58, or no match is found, the gap is judged as unqualified; otherwise the gap is qualified. To test the gap inspection system, 100 muskmelon seedlings were grafted and inspected. Results showed that the machine-vision system agreed with human visual inspection on gap qualification in 98% of cases, and the inspection speed can reach 15 seedlings·min-1. With this system the gap inspection process in grafting can be fully automated, a key step toward fully automatic grafting robots.
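The pass/fail rule can be illustrated with normalized template matching. The sketch below uses OpenCV's matchTemplate in place of the HALCON deformable matching the authors used; the 0.58 threshold comes from the abstract, while the synthetic images and function choices are assumptions for illustration.

```python
# Hedged sketch of the qualified/unqualified decision using normalized
# template matching (OpenCV stands in for HALCON; images are synthetic).
import cv2
import numpy as np

rng = np.random.default_rng(0)
template = (rng.random((40, 40)) * 255).astype(np.uint8)   # stand-in for the qualified-gap template
scene = (rng.random((120, 160)) * 255).astype(np.uint8)    # stand-in for a grafted-seedling image
scene[30:70, 50:90] = template                              # embed a "qualified gap"

result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_score, _, _ = cv2.minMaxLoc(result)

# Abstract's rule: matching degree < 0.58 (or no match) => unqualified.
verdict = "qualified" if max_score >= 0.58 else "unqualified"
print(f"matching degree = {max_score:.2f} -> {verdict}")
```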
Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie
2002-01-01
Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements, whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge-related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also enhance edge detection. A mesh of interconnected L4's would correlate the output from L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry to detect and segment images.
NASA Astrophysics Data System (ADS)
Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.
Exhaustive quality control is becoming very important in the world's globalized market. One example where quality control becomes critical is percussion cap mass production. These elements must achieve a minimum tolerance deviation in their fabrication. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. This system presents multiple problems, such as metallic reflections in the percussion caps, high-speed movement of the system, and mechanical errors and irregularities in percussion cap placement. Due to these problems, it is impossible to solve the problem by traditional image processing methods, and hence machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.
Machine vision inspection of lace using a neural network
NASA Astrophysics Data System (ADS)
Sanby, Christopher; Norton-Wayne, Leonard
1995-03-01
Lace is particularly difficult to inspect using machine vision since it comprises a fine and complex pattern of threads which must be verified, on line and in real time. Small distortions in the pattern are unavoidable. This paper describes instrumentation for inspecting lace actually on the knitting machine. A CCD linescan camera synchronized to machine motions grabs an image of the lace. Differences between this lace image and a perfect prototype image are detected by comparison methods, thresholding techniques, and finally, a neural network (to distinguish real defects from false alarms). Though produced originally in a laboratory on SUN Sparc workstations, the processing has subsequently been implemented on a 50 MHz 486 PC look-alike. Successful operation has been demonstrated in a factory, but over a restricted width. Full-width coverage awaits provision of faster processing.
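The compare-and-threshold stage lends itself to a compact illustration. The sketch below is an assumption-laden stand-in: synthetic NumPy arrays replace camera images and the threshold is arbitrary; the paper's neural-network stage is only noted in a comment.

```python
# Minimal sketch of the comparison step: subtract a golden prototype image
# from the live line-scan image and threshold the residual to flag candidate
# defects (arrays here are synthetic stand-ins).
import numpy as np

prototype = np.zeros((64, 512), dtype=np.int16)   # perfect lace pattern
live = prototype.copy()
live[30, 200:210] = 180                            # simulated broken-thread defect

residual = np.abs(live - prototype)
candidates = residual > 50                         # threshold chosen for illustration

print("defect pixels flagged:", int(candidates.sum()))
# In the reported system, a neural network then classifies each flagged
# region to separate real defects from false alarms.
```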
Experiences Using an Open Source Software Library to Teach Computer Vision Subjects
ERIC Educational Resources Information Center
Cazorla, Miguel; Viejo, Diego
2015-01-01
Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library, oriented to teaching computer vision. We have designed and built the library from the scratch with emphasis on readability and…
Feature recognition and detection for ancient architecture based on machine vision
NASA Astrophysics Data System (ADS)
Zou, Zheng; Wang, Niannian; Zhao, Peng; Zhao, Xuefeng
2018-03-01
Ancient architecture has a very high historical and artistic value. Ancient buildings feature a wide variety of textures and decorative paintings, which carry a great deal of historical meaning. Research on and statistics of these compositional and decorative features therefore play an important role in subsequent studies. Until recently, however, the statistics of these components have mainly been compiled by hand, which consumes a lot of labor and time and is inefficient. At present, with the strong support of big data and GPU-accelerated training, machine vision with deep learning at its core has developed rapidly and is widely used in many fields. This paper proposes an approach to recognize and detect the textures, decorations, and other features of ancient buildings based on machine vision. First, a large number of surface-texture images of ancient building components are manually classified as a set of samples. Then, a convolutional neural network is trained on the samples to obtain a classification detector. Finally, its precision is verified.
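As an illustration of the train-a-detector step, here is a minimal convolutional classifier in PyTorch. The architecture, patch size, and five-class assumption are all invented for the sketch; the paper does not specify its network.

```python
# Illustrative sketch: a small convolutional network trained to classify
# surface-texture patches (architecture and class count are assumptions).
import torch
import torch.nn as nn

class TextureNet(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 input patches

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TextureNet()
patches = torch.randn(8, 3, 64, 64)        # a batch of labelled texture patches
logits = model(patches)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 5, (8,)))
loss.backward()                             # one training step of the detector
```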
A low-cost machine vision system for the recognition and sorting of small parts
NASA Astrophysics Data System (ADS)
Barea, Gustavo; Surgenor, Brian W.; Chauhan, Vedang; Joshi, Keyur D.
2018-04-01
An automated machine vision-based system for the recognition and sorting of small parts was designed, assembled and tested. The system was developed to address a need to expose engineering students to the issues of machine vision and assembly automation technology, with readily available and relatively low-cost hardware and software. This paper outlines the design of the system and presents experimental performance results. Three different styles of plastic gears, together with three different styles of defective gears, were used to test the system. A pattern matching tool was used for part classification. Nine experiments were conducted to demonstrate the effects of changing various hardware and software parameters, including: conveyor speed, gear feed rate, classification, and identification score thresholds. It was found that the system could achieve a maximum system accuracy of 95% at a feed rate of 60 parts/min, for a given set of parameter settings. Future work will be looking at the effect of lighting.
An Automated Classification Technique for Detecting Defects in Battery Cells
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2006-01-01
Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine if the battery cell is acceptable for a particular use or device. Human visual inspection is a time consuming task when compared to an inspection process conducted by a machine vision system. Human inspection is also subject to human error and fatigue over time. We present a machine vision technique that can be used to automatically identify defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or cathode cell view as well as classified as an acceptable or a defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is the implementation of an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.
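To make the FFT-feature idea concrete, the sketch below computes a small spectral feature vector from an image region. It is a simplified stand-in under stated assumptions (radial-band energies, synthetic data), not the authors' adaptive morphological classifier.

```python
# Hedged sketch: derive features from the 2-D FFT of a cell-image region
# (the paper's actual classifier is morphological and adaptive; this is a
# simplified stand-in).
import numpy as np

def fft_features(region: np.ndarray) -> np.ndarray:
    """Radially summarized magnitude spectrum of an image region."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(region)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - cy, x - cx).astype(int)
    # Mean spectral energy in a few radial bands acts as the feature vector.
    return np.array([spectrum[(r >= lo) & (r < hi)].mean()
                     for lo, hi in [(0, 4), (4, 8), (8, 16)]])

region = np.random.default_rng(1).random((32, 32))
print("feature vector:", np.round(fft_features(region), 2))
```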
Thermographic Sensing For On-Line Industrial Control
NASA Astrophysics Data System (ADS)
Holmsten, Dag
1986-10-01
It is today's emergence of thermoelectrically cooled, highly accurate infrared linescanners and imaging systems that has definitely made on-line infrared thermography (IRT) possible. Specifically designed for continuous use, these scanners are equipped with dedicated software capable of monitoring and controlling highly complex thermodynamic situations. This paper will outline some possible implications of using IRT on-line by describing some uses of this technology in the steel-making (hot rolling) and automotive (machine-vision) industries. A warning is also expressed that IRT technology not originally designed for automated applications, e.g. high-resolution imaging systems, should not be directly applied to an on-line measurement situation without having its measurement resolution, accuracy, and especially its repeatability reliably proven. Some suitable testing procedures are briefly outlined at the end of the paper.
An active role for machine learning in drug development
Murphy, Robert F.
2014-01-01
Due to the complexity of biological systems, cutting-edge machine-learning methods will be critical for future drug development. In particular, machine-vision methods to extract detailed information from imaging assays and active-learning methods to guide experimentation will be required to overcome the dimensionality problem in drug development. PMID:21587249
Using machine learning to emulate human hearing for predictive maintenance of equipment
NASA Astrophysics Data System (ADS)
Verma, Dinesh; Bent, Graham
2017-05-01
At the current time, interfaces between humans and machines use only a limited subset of senses that humans are capable of. The interaction among humans and computers can become much more intuitive and effective if we are able to use more senses, and create other modes of communicating between them. New machine learning technologies can make this type of interaction become a reality. In this paper, we present a framework for a holistic communication between humans and machines that uses all of the senses, and discuss how a subset of this capability can allow machines to talk to humans to indicate their health for various tasks such as predictive maintenance.
Color machine vision system for process control in the ceramics industry
NASA Astrophysics Data System (ADS)
Penaranda Marques, Jose A.; Briones, Leoncio; Florez, Julian
1997-08-01
This paper is focused on the design of a machine vision system to solve a problem found in the manufacturing process of high-quality polished porcelain tiles. This consists of sorting the tiles according to the criterion 'same appearance to the human eye', or in other words, by color and visual texture. In 1994 this problem was tackled and led to a prototype which became fully operational at production scale in a manufacturing plant, named Porcelanatto, S.A. The system has evolved and has been adapted to meet the particular needs of this manufacturing company. Among the main issues that have been improved, it is worth pointing out: (1) improvement in discerning subtle variations in color or texture, which are the main features of the visual appearance; (2) inspection time reduction, as a result of algorithm optimization and increasing computing power, so that 100 percent of the production can be inspected, reaching a maximum of 120 tiles/sec; (3) adaptation to the different types and models of tiles manufactured. The tiles vary not only in their visible patterns but also in dimensions, formats, thickness and allowances. In this sense, one major problem has been reaching an optimal compromise: the system must be sensitive enough to discern subtle variations in color, but at the same time insensitive to thickness variations in the tiles. The following parts have been used to build the system: an RGB color line scan camera, 12 bits per channel, a PCI frame grabber, a PC, fiber-optic-based illumination, and the algorithm, which will be explained in section 4.
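A color-histogram comparison gives the flavor of such a 'same appearance' test. The sketch below is purely illustrative: bin counts, the distance metric, and the tolerance are invented, and random arrays stand in for tile images.

```python
# Minimal sketch of the sorting criterion: summarize each tile by a colour
# histogram and group tiles whose histograms are close (all values invented).
import numpy as np

def colour_signature(tile_rgb: np.ndarray, bins: int = 8) -> np.ndarray:
    hist, _ = np.histogramdd(tile_rgb.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

rng = np.random.default_rng(2)
tile_a = rng.integers(90, 120, (64, 64, 3))   # two tiles of similar shade
tile_b = rng.integers(95, 125, (64, 64, 3))

distance = np.abs(colour_signature(tile_a) - colour_signature(tile_b)).sum()
same_batch = distance < 0.5                   # illustrative tolerance
print(f"histogram distance = {distance:.3f}, same appearance: {same_batch}")
```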
Sensory Substitution and Multimodal Mental Imagery.
Nanay, Bence
2017-09-01
Many philosophers use findings about sensory substitution devices in the grand debate about how we should individuate the senses. The big question is this: Is "vision" assisted by (tactile) sensory substitution really vision? Or is it tactile perception? Or some sui generis novel form of perception? My claim is that sensory substitution assisted "vision" is neither vision nor tactile perception, because it is not perception at all. It is mental imagery: visual mental imagery triggered by tactile sensory stimulation. But it is a special form of mental imagery that is triggered by corresponding sensory stimulation in a different sense modality, which I call "multimodal mental imagery."
Machine Learning Techniques in Clinical Vision Sciences.
Caixinha, Miguel; Nunes, Sandrina
2017-01-01
This review presents and discusses the contribution of machine learning techniques to diagnosis and disease monitoring in the context of clinical vision science. Many ocular diseases leading to blindness can be halted or delayed when detected and treated at their earliest stages. With the recent developments in diagnostic devices, imaging and genomics, new sources of data for early disease detection and patient management are now available. Machine learning techniques emerged in the biomedical sciences as clinical decision-support techniques to improve the sensitivity and specificity of disease detection and monitoring, making the clinical decision-making process more objective. This manuscript presents a review of multimodal ocular disease diagnosis and monitoring based on machine learning approaches. In the first section, the technical issues related to the different machine learning approaches are presented. Machine learning techniques are used to automatically recognize complex patterns in a given dataset. These techniques allow the creation of homogeneous groups (unsupervised learning), or of a classifier predicting group membership of new cases (supervised learning), when a group label is available for each case. To ensure a good performance of machine learning techniques in a given dataset, all possible sources of bias should be removed or minimized. For that, the representativeness of the input dataset for the true population should be confirmed, noise should be removed, missing data should be treated, and the data dimensionality (i.e., the number of parameters/features and the number of cases in the dataset) should be adjusted. The application of machine learning techniques to ocular disease diagnosis and monitoring is presented and discussed in the second section of this manuscript. To show the clinical benefits of machine learning in clinical vision sciences, several examples are presented in glaucoma, age-related macular degeneration, and diabetic retinopathy, these ocular pathologies being major causes of irreversible visual impairment.
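The bias-reduction checklist the review gives (treat missing data, remove noise and scale effects, adjust dimensionality, then learn) maps naturally onto a preprocessing pipeline. The following sketch uses scikit-learn with synthetic data; every component choice here is a placeholder, not a recommendation from the review.

```python
# Hedged sketch of the listed bias-reduction steps as a single pipeline:
# impute missing values, normalize, reduce dimensionality, then classify.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.random((200, 30))
X[rng.random(X.shape) < 0.05] = np.nan       # simulate missing clinical data
y = rng.integers(0, 2, 200)                  # disease / no-disease labels

clf = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # treat missing data
    ("scale", StandardScaler()),                    # remove scale effects
    ("reduce", PCA(n_components=10)),               # adjust dimensionality
    ("model", LogisticRegression(max_iter=1000)),   # supervised learning
]).fit(X, y)
print("training accuracy:", clf.score(X, y))
```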
Eyes for Learning: Preventing and Curing Vision-Related Learning Problems
ERIC Educational Resources Information Center
Orfield, Antonia
2007-01-01
Dr. Orfield's highly readable guide on vision development presents ground-breaking solutions to common learning problems and is supported by substantial data. This holistic common sense--that most people do not know--is not just about vision but also how vision is interrelated with learning. It teaches how to care for a child's vision as well as…
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1991-01-01
The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms
NASA Astrophysics Data System (ADS)
Negro Maggio, Valentina; Iocchi, Luca
2015-02-01
Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.
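A reduced version of such simultaneous optimization can be expressed as a single grid search over a combined pipeline. The sketch below is only an analogy under stated assumptions: scikit-learn's PCA stands in for the image-processing stage, and the searched parameters are invented, not Auto-SEIA's actual search space.

```python
# Illustrative sketch of jointly searching preprocessing and classifier
# parameters, in the spirit of Auto-SEIA (reduced example using sklearn).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

pipe = Pipeline([("features", PCA()), ("clf", SVC())])
search = GridSearchCV(pipe, {
    "features__n_components": [10, 20, 30],   # "image processing" stage parameter
    "clf__C": [0.1, 1, 10],                   # classifier parameters
    "clf__gamma": ["scale", 0.01],
}, cv=3)

X, y = load_digits(return_X_y=True)
search.fit(X, y)
print("best configuration:", search.best_params_)
print("best accuracy:", round(search.best_score_, 3))
```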
The Intangible Assets Advantages in the Machine Vision Inspection of Thermoplastic Materials
NASA Astrophysics Data System (ADS)
Muntean, Diana; Răulea, Andreea Simina
2017-12-01
Innovation is not a simple concept, but it is the main source of success. To make the most of an idea or business, it is more important to have the right people and mindsets in place than a perfectly crafted plan. The aim of this paper is to emphasize the importance of intangible assets in the machine vision inspection of thermoplastic materials, pointing out aspects related to knowledge-based assets and their role in developing a successful idea into a successful product.
ERIC Educational Resources Information Center
Perkins, Alison; Brewer, Carol
2010-01-01
Insect vision is an area of active research that allows fruitful exploration into the nature of the scientific endeavor because of the bias our own vision brings. As scientists, we use our senses to make observations, but we can't assume that what we see is what insects see; we are forced to think outside of our own senses when we ask questions…
Steering of an automated vehicle in an unstructured environment
NASA Astrophysics Data System (ADS)
Kanakaraju, Sampath; Shanmugasundaram, Sathish K.; Thyagarajan, Ramesh; Hall, Ernest L.
1999-08-01
The purpose of this paper is to describe a high-level path planning logic, which processes the data from a vision system and an ultrasonic obstacle avoidance system and steers an autonomous mobile robot between obstacles. The test bed was an autonomous robot built at the University of Cincinnati, and this logic was tested and debugged on this machine. Attempts had already been made to incorporate a fuzzy system on a similar robot, and this paper extends them to take advantage of the robot's ZTR capability. Using the integrated vision system, the vehicle senses its location and orientation. A rotating ultrasonic sensor is used to map the location and size of possible obstacles. With these inputs, the fuzzy logic controls the speed and steering decisions of the robot. With the incorporation of this logic, Bearcat II has been very successful at avoiding obstacles. This was demonstrated in the Ground Robotics Competition conducted by the AUVS in June 1999, where it travelled a distance of 154 feet on a 10 ft-wide path riddled with obstacles. This logic proved to be a significant contributing factor in this feat of Bearcat II.
Understanding human visual systems and its impact on our intelligent instruments
NASA Astrophysics Data System (ADS)
Strojnik Scholl, Marija; Páez, Gonzalo; Scholl, Michelle K.
2013-09-01
We review the evolution of machine vision and comment on the cross-fertilization from the neural sciences onto the flourishing fields of neural processing, parallel processing, and associative memory in the optical sciences and computing. We then examine how the intensive efforts in mapping the human brain have been influenced by concepts in computer science, control theory, and electronic circuits. We discuss two neural pathways that employ input from the vision sense to determine navigational options and recognize objects: the ventral temporal pathway for object recognition (what?) and the dorsal parietal pathway for navigation (where?). We describe the reflexive and conscious decision centers in the cerebral cortex involved in visual attention and gaze control; interestingly, these require a return path through the midbrain for ocular muscle control. We find that cognitive psychologists currently study the human brain employing low-spatial-resolution fMRI with a temporal response on the order of a second. In recent years, life scientists have concentrated on insect brains to study neural processes. We discuss how reflexive and conscious gaze-control decisions are made in the frontal eye field and inferior parietal lobe, constituting the fronto-parietal attention network. We note that ethical and experiential learning impacts our conscious decisions.
Machine vision system: a tool for quality inspection of food and agricultural products.
Patel, Krishna Kumar; Kar, A; Jha, S N; Khan, M A
2012-04-01
Quality inspection of food and agricultural produce is difficult and labor intensive. At the same time, with increased expectations for food products of high quality and safety standards, the need for accurate, fast and objective determination of these characteristics in food products continues to grow. However, these operations in India are generally manual, which is costly as well as unreliable, because human judgment in identifying quality factors such as appearance, flavor, nutrient content, texture, etc., is inconsistent, subjective and slow. Machine vision provides one alternative: an automated, non-destructive and cost-effective technique to accomplish these requirements. This inspection approach, based on image analysis and processing, has found a variety of different applications in the food industry. Considerable research has highlighted its potential for the inspection and grading of fruits and vegetables, grain quality and characteristic examination, and quality evaluation of other food products such as bakery products, pizza, cheese and noodles. The objective of this paper is to provide an in-depth introduction to machine vision systems, their components, and recent work reported on food and agricultural produce.
Design of apochromatic lens with large field and high definition for machine vision.
Yang, Ao; Gao, Xingyu; Li, Mingfeng
2016-08-01
Precise machine vision detection of a large object at a finite working distance (WD) requires a lens with high resolution over a large field of view (FOV). In this case, the effect of the secondary spectrum on image quality is not negligible. According to the detection requirements, a high-resolution apochromatic objective is designed and analyzed. The initial optical structure (IOS) is composed of three segments. Next, the secondary spectrum of the IOS is corrected by replacing glasses using the dispersion vector analysis method based on the Buchdahl dispersion equation. Other aberrations are optimized with the commercial optical design software ZEMAX by properly choosing the optimization function operands. The optimized optical structure (OOS) has an f-number (F/#) of 3.08, a FOV of φ60 mm, a WD of 240 mm, and a modulation transfer function (MTF) of more than 0.1 at 320 cycles/mm in all fields. The design requirements for a nonfluorite-material apochromatic objective lens with a large field and high definition for machine vision detection have been achieved.
Broiler weight estimation based on machine vision and artificial neural network.
Amraei, S; Abdanan Mehdizadeh, S; Salari, S
2017-04-01
1. Machine vision and artificial neural network (ANN) procedures were used to estimate the live body weight of broiler chickens, using 30 1-d-old broiler chickens reared for 42 d. 2. Imaging was performed twice daily. To localise chickens within the pen, an ellipse-fitting algorithm was used, and the chickens' head and tail were removed using the Chan-Vese method. 3. The correlations between body weight and 6 extracted physical features indicated strong correlations between body weight and 5 features: area, perimeter, convex area, and major and minor axis length. 5. According to statistical analysis, there was no significant difference between morning and afternoon data over the 42 d. 6. In an attempt to improve the accuracy of live weight approximation, different ANN training techniques, including Bayesian regulation, Levenberg-Marquardt, scaled conjugate gradient and gradient descent, were used. Bayesian regulation, with an R2 value of 0.98, was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined, and most errors were less than 50 g.
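The feature-extraction-plus-regression stage can be sketched as follows. Everything here is a stand-in under assumptions: scikit-image region properties supply the five named shape features, synthetic elliptical masks replace segmented chickens, and an MLP regressor replaces the paper's Bayesian-regularized network.

```python
# Hedged sketch: extract the five shape features named in the abstract from a
# binary mask and regress weight with a small neural network (synthetic data).
import numpy as np
from skimage.measure import label, regionprops
from sklearn.neural_network import MLPRegressor

def shape_features(mask: np.ndarray) -> list[float]:
    props = regionprops(label(mask))[0]
    return [props.area, props.perimeter, props.convex_area,
            props.major_axis_length, props.minor_axis_length]

# Synthetic stand-ins: elliptical masks of varying size with made-up weights.
rng = np.random.default_rng(4)
X, y = [], []
for _ in range(40):
    a, b = rng.integers(10, 30, 2)
    yy, xx = np.ogrid[-40:40, -40:40]
    mask = ((xx / a) ** 2 + (yy / b) ** 2 <= 1).astype(int)
    X.append(shape_features(mask))
    y.append(0.5 * a * b + rng.normal(0, 5))   # fake weight correlated with size

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(X, y)
print("predicted weight for first bird:", round(model.predict([X[0]])[0], 1))
```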
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-08-01
This article reports that there are literally hundreds of machine vision systems from which to choose. They range in cost from $10,000 to $1,000,000. Most have been designed for specific applications; the same systems, if used for a different application, may fail dismally. How can you avoid wasting money on inferior, useless, or nonexpandable systems? A good reference is the Automated Vision Association in Ann Arbor, Mich., a trade group comprised of North American machine vision manufacturers. Reputable suppliers caution users to do their homework before making an investment. Important considerations include comprehensive details on the objects to be viewed, that is, quantity, shape, dimension, size, and configuration details; lighting characteristics and variations; and component orientation details. Then, what do you expect the system to do: inspect, locate components, aid in robotic vision? Other criteria include system speed and related accuracy and reliability. What are the projected benefits and system paybacks? Examine primarily paybacks associated with scrap and rework reduction as well as reduced warranty costs.
2017-12-21
Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed [1]. Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term "machine learning" in 1959 while at IBM [2]. Machine learning is closely related to (and often overlaps with) computational statistics, and is applied to tasks such as learning to rank and computer vision.
On-line welding quality inspection system for steel pipe based on machine vision
NASA Astrophysics Data System (ADS)
Yang, Yang
2017-05-01
In recent years, high-frequency welding has been widely used in production because of its advantages of simplicity, reliability and high quality. How to effectively control weld penetration during welding, ensuring full penetration and a uniform weld so as to guarantee welding quality, is a problem to be solved at the present stage and an important research field in welding technology. In this paper, based on a study of several welding inspection methods, an on-line welding quality inspection system for steel pipe based on machine vision is designed.
Rubber hose surface defect detection system based on machine vision
NASA Astrophysics Data System (ADS)
Meng, Fanwu; Ren, Jingrui; Wang, Qi; Zhang, Teng
2018-01-01
Automotive hose is widely used in automobiles as an important part connecting the engine, air filter, cooling system, and air-conditioning system. The determination of the surface quality of the hose is therefore particularly important. This research is based on machine vision technology, using HALCON algorithms to process hose images and identify surface defects on the hose. In order to improve the detection accuracy of the vision system, this paper proposes a method of classifying the defects to reduce misjudgment. The experimental results show that the method can detect surface defects accurately.
Machine vision system for automated detection of stained pistachio nuts
NASA Astrophysics Data System (ADS)
Pearson, Tom C.
1995-01-01
A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved with manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bi-chromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bi-chromatic sorter reject stream and 15% for the small shelling stock stream.
ERIC Educational Resources Information Center
Wash, Darrel Patrick
1989-01-01
Making a machine seem intelligent is not easy. As a consequence, demand has been rising for computer professionals skilled in artificial intelligence and is likely to continue to go up. These workers develop expert systems and solve the mysteries of machine vision, natural language processing, and neural networks. (Editor)
A learning–based approach to artificial sensory feedback leads to optimal integration
Dadarlat, Maria C.; O’Doherty, Joseph E.; Sabes, Philip N.
2014-01-01
Proprioception—the sense of the body’s position in space—plays an important role in natural movement planning and execution and will likewise be necessary for successful motor prostheses and Brain–Machine Interfaces (BMIs). Here, we demonstrated that monkeys could learn to use an initially unfamiliar multi–channel intracortical microstimulation (ICMS) signal, which provided continuous information about hand position relative to an unseen target, to complete accurate reaches. Furthermore, monkeys combined this artificial signal with vision to form an optimal, minimum–variance estimate of relative hand position. These results demonstrate that a learning–based approach can be used to provide a rich artificial sensory feedback signal, suggesting a new strategy for restoring proprioception to patients using BMIs as well as a powerful new tool for studying the adaptive mechanisms of sensory integration. PMID:25420067
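The minimum-variance (inverse-variance-weighted) combination rule referred to here has a compact closed form. The numbers in the sketch below are invented for illustration; only the formula reflects the integration model.

```python
# Worked sketch of minimum-variance cue combination: weight each cue by its
# inverse variance (all numbers illustrative).
vision_est, vision_var = 2.0, 0.25    # hand-position estimate from vision (cm)
icms_est, icms_var = 2.6, 1.00        # estimate from the ICMS signal (cm)

w_vision = icms_var / (vision_var + icms_var)
w_icms = vision_var / (vision_var + icms_var)

combined_est = w_vision * vision_est + w_icms * icms_est
combined_var = (vision_var * icms_var) / (vision_var + icms_var)

print(f"combined estimate = {combined_est:.2f} cm")       # 2.12 cm
print(f"combined variance = {combined_var:.2f} cm^2")     # 0.20, below either cue alone
```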
Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, W.J.; Chun, W.H.
1990-01-01
The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.
Developing and Validating Practical Eye Metrics for the Sense-Assess-Augment Framework
2015-09-29
This effort develops and validates practical eye metrics for AFRL's Sense-Assess-Augment (SAA) framework, which is designed to sense a suite of physiological signals from the operator and use these signals to assess operator state. Like other research programs that use psychophysiological measures to improve human-machine teamwork (such as Biocybernetics or Augmented Cognition), the AFRL SAA program aims to better close the loop between human and machine teammates.
Developing Administrative Vision.
ERIC Educational Resources Information Center
Chance, Edward W.
Visionary leadership has emerged as a significant characteristic of high performing school administrators. Vision provides a sense of direction for the school and facilitates accomplishment. Administrators must move from authoritarian and managerial modes of operation to proactive leadership, and maintain a focus on the vision through turmoil and…
2011-11-01
This report (RX-TY-TR-2011-0096-01) summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica.
The New School Year: An Opportunity to Commit to a Shared Vision
ERIC Educational Resources Information Center
Williamson, Ronald
2009-01-01
The evidence is clear. The most effective principals have a clear sense of vision and purpose. They know themselves and their personal ethic. They also recognize the importance of vision to guide their work with teachers and other school personnel. A "clear, strong and collectively held vision" was identified by the North Central Regional…
Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks
NASA Astrophysics Data System (ADS)
DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.
2017-03-01
By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.
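A bag-of-visual-words pipeline captures the flavor of the feature detection, description, and clustering steps described here. The sketch below substitutes ORB descriptors and k-means for whatever the authors actually used, and generates synthetic particle images, so treat every choice as an assumption.

```python
# Hedged sketch: detect local features in powder-like images, learn a visual
# dictionary, and represent each image as a histogram over that dictionary
# (ORB and k-means are stand-ins for the paper's methods; data are synthetic).
import cv2
import numpy as np
from sklearn.cluster import KMeans

def synth_micrograph(seed: int) -> np.ndarray:
    """Stand-in powder micrograph: bright particles on a dark background."""
    rng = np.random.default_rng(seed)
    img = np.zeros((128, 128), np.uint8)
    for _ in range(30):
        x, y, r = rng.integers(10, 118), rng.integers(10, 118), rng.integers(3, 9)
        cv2.circle(img, (int(x), int(y)), int(r), int(rng.integers(100, 255)), -1)
    return img

orb = cv2.ORB_create()
micrographs = [synth_micrograph(s) for s in range(6)]

# Pool descriptors across images and learn a small visual dictionary.
descs = []
for img in micrographs:
    _, d = orb.detectAndCompute(img, None)
    if d is not None:
        descs.append(d)
dictionary = KMeans(n_clusters=8, n_init=10, random_state=0).fit(
    np.vstack(descs).astype(np.float32))

def bag_of_words(img: np.ndarray) -> np.ndarray:
    """Histogram of dictionary-word assignments: the image representation."""
    _, d = orb.detectAndCompute(img, None)
    words = dictionary.predict(d.astype(np.float32))
    return np.bincount(words, minlength=8) / len(words)

print("representation of first micrograph:", np.round(bag_of_words(micrographs[0]), 2))
```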
A forestry application simulation of man-machine techniques for analyzing remotely sensed data
NASA Technical Reports Server (NTRS)
Berkebile, J.; Russell, J.; Lube, B.
1976-01-01
The typical steps in the analysis of remotely sensed data for a forestry applications example are simulated. The example uses numerically-oriented pattern recognition techniques and emphasizes man-machine interaction.
Implementation of a robotic flexible assembly system
NASA Technical Reports Server (NTRS)
Benton, Ronald C.
1987-01-01
As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies and ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...
2014-12-09
We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.
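As a rough illustration of the dictionary-then-cluster idea, the sketch below substitutes scikit-learn's MiniBatchDictionaryLearning for the paper's Hebbian learning rule and clusters the resulting sparse codes with k-means; the patch array is a random placeholder standing in for flattened WorldView-2 patches:

```python
# Sketch of unsupervised land-cover clustering from sparse codes.
# MiniBatchDictionaryLearning stands in for the paper's Hebbian rule;
# `patches` is an assumed (n_patches, bands*h*w) multispectral array.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
patches = rng.random((10000, 8 * 5 * 5))   # placeholder for 8-band 5x5 patches

# Learn an overcomplete spectral-textural dictionary.
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0,
                                   batch_size=256, random_state=0)
codes = dico.fit_transform(patches)        # sparse approximation per patch

# Cluster the sparse codes to obtain unsupervised land-cover labels.
labels = KMeans(n_clusters=8, random_state=0).fit_predict(codes)
```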
Optical character recognition reading aid for the visually impaired.
Grandin, Juan Carlos; Cremaschi, Fabian; Lombardo, Elva; Vitu, Ed; Dujovny, Manuel
2008-06-01
An optical character recognition (OCR) reading machine is a significant help for visually impaired patients. An OCR reading machine was used in this study; such an instrument can significantly improve the quality of life of patients with low vision or blindness.
Integrity Determination for Image Rendering Vision Navigation
2016-03-01
identifying an object within a scene, tracking a SIFT feature between frames, or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community... matching or image-rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or...
Hong, Deokhwa; Lee, Hyunki; Kim, Min Young; Cho, Hyungsuck; Moon, Jeon Il
2009-07-20
Automatic optical inspection (AOI) for printed circuit board (PCB) assembly plays a very important role in modern electronics manufacturing industries. Well-developed inspection machines in each assembly process are required to ensure the manufacturing quality of the electronics products. However, almost all AOI machines are based on 2D image-analysis technology. In this paper, a 3D-measurement-based AOI system is proposed, consisting of a phase shifting profilometer and a stereo vision system, for assembled electronic components on a PCB after component mounting and the reflow process. In this system, information from the two visual systems is fused to extend the shape measurement range limited by the 2π phase ambiguity of the phase shifting profilometer, and thereby to maintain the fine measurement resolution and high accuracy of the phase shifting profilometer over the measurement range extended by the stereo vision. The main purpose is to overcome the low inspection reliability of 2D-based inspection machines by using 3D information of the components. The 3D shape measurement results on PCB-mounted electronic components are shown and compared with results from contact and noncontact 3D measuring machines. Based on a series of experiments, the usefulness of the proposed sensor system and its fusion technique are discussed and analyzed in detail.
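The fusion step rests on two small pieces of arithmetic: a wrapped phase from four π/2-shifted fringe images, and a fringe order recovered from the coarser stereo estimate. A minimal sketch follows; the conversion from stereo depth to an absolute phase estimate is calibration-specific and is assumed given here:

```python
# Sketch of 4-step phase shifting with stereo-guided unwrapping.
# I1..I4 are fringe images shifted by pi/2; `coarse_phase` is an absolute
# phase estimate derived from the stereo system (conversion assumed done).
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    # Wrapped phase in (-pi, pi] from four pi/2-shifted fringe images.
    return np.arctan2(I4 - I2, I1 - I3)

def unwrap_with_stereo(wrapped, coarse_phase):
    # The stereo estimate resolves the 2*pi fringe-order ambiguity:
    # pick the integer fringe order k closest to the coarse estimate.
    k = np.round((coarse_phase - wrapped) / (2 * np.pi))
    return wrapped + 2 * np.pi * k
```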
a Fully Automated Pipeline for Classification Tasks with AN Application to Remote Sensing
NASA Astrophysics Data System (ADS)
Suzuki, K.; Claesen, M.; Takeda, H.; De Moor, B.
2016-06-01
Nowadays deep learning is intensively in the spotlight owing to its victories at major competitions, which has undeservedly pushed 'shallow' machine learning methods, the relatively simple and handy algorithms commonly used by industrial engineers, into the background despite advantages such as the small amount of time and training data they require. Taking a practical point of view, we utilized shallow learning algorithms to construct a learning pipeline such that operators can use machine learning without any special knowledge, expensive computation environment, or large amount of labelled data. The proposed pipeline automates the whole classification process, namely feature selection, feature weighting, and selection of the most suitable classifier with optimized hyperparameters. The configuration uses particle swarm optimization, a well-known metaheuristic algorithm, for the sake of generally fast and fine optimization; this enables us not only to optimize (hyper)parameters but also to determine the features and classifier appropriate to the problem, choices that have conventionally been made a priori from domain knowledge or handled with naive algorithms such as grid search. Through experiments with the MNIST and CIFAR-10 datasets, common computer vision datasets for character recognition and object recognition respectively, our automated learning approach provides high performance considering its simple, non-specialized setting, small amount of training data, and practical learning time. Moreover, compared to deep learning, the performance remains robust with almost no modification even on a remote sensing object recognition problem, which in turn indicates a high possibility that our approach contributes to general classification problems.
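A toy version of the optimization stage is sketched below: a small particle swarm searching over two SVM hyperparameters with cross-validated accuracy as the fitness. The real pipeline also selects features and classifiers; the search box, swarm size, and inertia constants here are illustrative assumptions:

```python
# Minimal particle swarm search over SVM hyperparameters (illustrative only;
# the paper's pipeline also selects features and classifiers).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(p):                      # p = (log10 C, log10 gamma)
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X, y, cv=3).mean()

n, dim = 10, 2
pos = rng.uniform([-2, -5], [3, 0], size=(n, dim))   # search box in log space
vel = np.zeros((n, dim))
pbest, pscore = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pscore.argmax()]

for _ in range(15):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Standard PSO update: inertia + pull toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    score = np.array([fitness(p) for p in pos])
    improved = score > pscore
    pbest[improved], pscore[improved] = pos[improved], score[improved]
    gbest = pbest[pscore.argmax()]

print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pscore.max())
```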
NASA Astrophysics Data System (ADS)
Brumby, S. P.; Warren, M. S.; Keisler, R.; Chartrand, R.; Skillman, S.; Franco, E.; Kontgis, C.; Moody, D.; Kelton, T.; Mathis, M.
2016-12-01
Cloud computing, combined with recent advances in machine learning for computer vision, is enabling understanding of the world at a scale and at a level of space and time granularity never before feasible. Multi-decadal Earth remote sensing datasets at the petabyte scale (8×10^15 bits) are now available in commercial cloud, and new satellite constellations will generate daily global coverage at a few meters per pixel. Public and commercial satellite observations now provide a wide range of sensor modalities, from traditional visible/infrared to dual-polarity synthetic aperture radar (SAR). This provides the opportunity to build a continuously updated map of the world supporting the academic community and decision-makers in government, finance and industry. We report on work demonstrating country-scale agricultural forecasting and global-scale land cover/land-use mapping using a range of public and commercial satellite imagery. We describe processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work combining this imagery with time-series SAR collected by ESA Sentinel 1. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. We apply remote sensing science and machine learning algorithms to detect and classify agricultural crops and then estimate crop yields and detect threats to food security (e.g., flooding, drought). The software platform and analysis methodology also support monitoring water resources, forests and other general indicators of environmental health, and can detect growth and changes in cities that are displacing historical agricultural zones.
NASA Technical Reports Server (NTRS)
Morrison, D. B. (Editor); Scherer, D. J.
1977-01-01
Papers are presented on a variety of techniques for the machine processing of remotely sensed data. Consideration is given to preprocessing methods such as the correction of Landsat data for the effects of haze, sun angle, and reflectance and to the maximum likelihood estimation of signature transformation algorithm. Several applications of machine processing to agriculture are identified. Various types of processing systems are discussed such as ground-data processing/support systems for sensor systems and the transfer of remotely sensed data to operational systems. The application of machine processing to hydrology, geology, and land-use mapping is outlined. Data analysis is considered with reference to several types of classification methods and systems.
Malik, Raza Naseem; Cote, Rachel; Lam, Tania
2017-01-01
Skilled walking, such as obstacle crossing, is an essential component of functional mobility. Sensorimotor integration of visual and proprioceptive inputs is important for successful obstacle crossing. The objective of this study was to understand how proprioceptive deficits affect obstacle-crossing strategies when controlling for variations in motor deficits in ambulatory individuals with spinal cord injury (SCI). Fifteen ambulatory individuals with SCI and 15 able-bodied controls were asked to step over an obstacle scaled to their motor abilities under full and obstructed vision conditions. An eye tracker was used to determine gaze behaviour and motion capture analysis was used to determine toe kinematics relative to the obstacle. Combined, bilateral hip and knee proprioceptive sense (joint position sense and movement detection sense) was assessed using the Lokomat and customized software controls. Combined, bilateral hip and knee proprioceptive sense in subjects with SCI varied and was significantly different from able-bodied subjects. Subjects with greater proprioceptive deficits stepped higher over the obstacle with their lead and trail limbs in the obstructed vision condition compared with full vision. Subjects with SCI also glanced at the obstacle more frequently and with longer fixation times compared with controls, but this was not related to proprioceptive sense. This study indicates that ambulatory individuals with SCI rely more heavily on vision to cross obstacles and show impairments in key gait parameters required for successful obstacle crossing. Our data suggest that proprioceptive deficits need to be considered in rehabilitation programs aimed at improving functional mobility in ambulatory individuals with SCI. This work is unique since it examines the contribution of combined, bilateral hip and knee proprioceptive sense on the recovery of skilled walking function, in addition to characterizing gaze behavior during a skilled walking task in people with motor-incomplete spinal cord injury. Copyright © 2017 the American Physiological Society.
Vision-Based People Detection System for Heavy Machine Applications
Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick
2016-01-01
This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838
Shi, Ce; Qian, Jianping; Han, Shuai; Fan, Beilei; Yang, Xinting; Wu, Xiaoming
2018-03-15
The study assessed the feasibility of developing a machine vision system based on pupil and gill color changes in tilapia for simultaneous prediction of total volatile basic nitrogen (TVB-N), thiobarbituric acid (TBA) and total viable counts (TVC) during storage at 4°C. The pupils and gills were chosen, and color space conversion among the RGB, HSI and L*a*b* color spaces was performed automatically by an image processing algorithm. Multiple regression models were established by correlating pupil and gill color parameters with TVB-N, TVC and TBA (R² = 0.989-0.999). However, assessment of freshness based on gill color is destructive and time-consuming because the gill cover must be removed before images are captured. Finally, visualization maps of spoilage based on pupil color were achieved using image algorithms. The results show that assessment of tilapia pupil color parameters using machine vision can be used as a low-cost, on-line method for predicting freshness during 4°C storage. Copyright © 2017 Elsevier Ltd. All rights reserved.
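A minimal sketch of the underlying regression step is shown below, with OpenCV handling the color space conversions; the ROI images and TVB-N values are random placeholders, not the study's data:

```python
# Sketch of the color-to-freshness regression idea: average RGB/HSV/Lab
# channels over a pupil ROI and regress TVB-N against them.
# `rois` and `tvbn` are placeholder data, not the study's measurements.
import cv2
import numpy as np
from sklearn.linear_model import LinearRegression

def color_features(roi_bgr):
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2LAB)
    # Mean of each channel in each color space -> 9 features per ROI.
    return np.concatenate([f.reshape(-1, 3).mean(axis=0)
                           for f in (roi_bgr, hsv, lab)])

rois = [np.random.randint(0, 255, (32, 32, 3), np.uint8) for _ in range(20)]
tvbn = np.random.uniform(5, 40, 20)      # placeholder TVB-N values (mg/100 g)

X = np.array([color_features(r) for r in rois])
model = LinearRegression().fit(X, tvbn)  # paper reports R^2 = 0.989-0.999
```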
Automatic decoding of facial movements reveals deceptive pain expressions
Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang
2014-01-01
In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and after training, improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling. PMID:24656830
Greiff, Kirsti; Mathiassen, John Reidar; Misimi, Ekrem; Hersleth, Margrethe; Aursand, Ida G.
2015-01-01
The European diet today generally contains too much sodium (Na+). A partial substitution of NaCl by KCl has shown to be a promising method for reducing sodium content. The aim of this work was to investigate the sensorial changes of cooked ham with reduced sodium content. Traditional sensorial evaluation and objective multimodal machine vision were used. The salt content in the hams was decreased from 3.4% to 1.4%, and 25% of the Na+ was replaced by K+. The salt reduction had highest influence on the sensory attributes salty taste, after taste, tenderness, hardness and color hue. The multimodal machine vision system showed changes in lightness, as a function of reduced salt content. Compared to the reference ham (3.4% salt), a replacement of Na+-ions by K+-ions of 25% gave no significant changes in WHC, moisture, pH, expressed moisture, the sensory profile attributes or the surface lightness and shininess. A further reduction of salt down to 1.7–1.4% salt, led to a decrease in WHC and an increase in expressible moisture. PMID:26422367
NASA Astrophysics Data System (ADS)
van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap
2018-04-01
In machine vision, typical heuristic methods to extract parameterized objects from raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of optimally extracting such parameterized objects given a correct definition of the model and of the type of noise at hand. One category of solvers for Bayesian models are Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. To address this, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, that performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in a machine vision context, its applications extend to the very general domain of statistical inference.
Machine Vision Within The Framework Of Collective Neural Assemblies
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1990-03-01
The proposed mechanism for designing a robust machine vision system is based on the dynamic activity generated by the various neural populations embedded in nervous tissue. It is postulated that a hierarchy of anatomically distinct tissue regions are involved in visual sensory information processing. Each region may be represented as a planar sheet of densely interconnected neural circuits. Spatially localized aggregates of these circuits represent collective neural assemblies. Four dynamically coupled neural populations are assumed to exist within each assembly. In this paper we present a state-variable model for a tissue sheet derived from empirical studies of population dynamics. Each population is modelled as a nonlinear second-order system. It is possible to emulate certain observed physiological and psychophysiological phenomena of biological vision by properly programming the interconnective gains. Important early visual phenomena such as temporal and spatial noise insensitivity, contrast sensitivity and edge enhancement will be discussed for a one-dimensional tissue model.
Vision technology/algorithms for space robotics applications
NASA Technical Reports Server (NTRS)
Krishen, Kumar; Defigueiredo, Rui J. P.
1987-01-01
Automation and robotics for space applications have been proposed to increase productivity, improve reliability, increase flexibility and enhance safety; to automate time-consuming tasks; to increase the productivity and performance of crew-accomplished tasks; and to perform tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.
A robust embedded vision system feasible white balance algorithm
NASA Astrophysics Data System (ADS)
Wang, Yuan; Yu, Feihong
2018-01-01
White balance is a very important part of the color image processing pipeline. In order to meet the needs of efficiency and accuracy in embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, a bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
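The paper's algorithm combines several classical methods with an adaptive iterative step; as a baseline illustration of the statistics-driven initialization it builds on, here is a minimal gray-world white balance sketch:

```python
# Minimal gray-world white balance: scale each channel so its mean matches
# the global mean ("the average scene is gray"). A baseline only; it does
# not reproduce the paper's adaptive iterative scheme.
import numpy as np

def gray_world(img):
    # img: float RGB array in [0, 1], shape (h, w, 3).
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(img * gains, 0.0, 1.0)
```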
Feature-based three-dimensional registration for repetitive geometry in machine vision
Gong, Yuanzheng; Seibel, Eric J.
2016-01-01
As an important step in three-dimensional (3D) machine vision, 3D registration is the process of aligning two or more 3D point clouds, collected from different perspectives, into a complete one. The most popular approach to registering point clouds is to minimize the difference between the point clouds iteratively with the Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align point clouds generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved so that the 3D registration of two point clouds reduces to solving a rigid transformation. A comparison of our method with different ICP algorithms demonstrates that the proposed algorithm is more accurate, efficient and robust for repetitive-geometry registration. Moreover, this method can also be used to address the high depth uncertainty caused by a small camera baseline in vision-based 3D reconstruction. PMID:28286703
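Once feature matching has produced 3D correspondences, the registration reduces to a closed-form rigid transform, the standard Kabsch/Procrustes solution sketched below (this is the textbook construction, not the authors' specific code):

```python
# Rigid-transform estimation from matched 3D points (Kabsch/Procrustes):
# the core step once feature matching has supplied 3D correspondences.
import numpy as np

def rigid_transform(P, Q):
    # P, Q: (n, 3) matched point sets; returns R, t with Q ~= P @ R.T + t.
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```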
Landmark navigation and autonomous landing approach with obstacle detection for aircraft
NASA Astrophysics Data System (ADS)
Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.
1997-06-01
A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors such as gyros, accelerometers, an artificial horizon, aerodynamic measuring devices and GPS with vision data taken by conventional CCD cameras mounted on a pan-and-tilt platform, the position of the craft can be determined, as well as its position relative to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark the vision system should focus on, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g. due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors were fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
SPACE: Vision and Reality: Face to Face. Proceedings Report
NASA Technical Reports Server (NTRS)
1995-01-01
The proceedings of the 11th National Space Symposium, entitled 'Vision and Reality: Face to Face', are presented. Topics discussed include the following sections: Vision for the future; Positioning for the future; Remote sensing, the emerging era; Space opportunities; Competitive vision with acquisition reality; National security requirements in space; The world is into space; and The outlook for space. An appendix is also attached.
Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness
NASA Astrophysics Data System (ADS)
Singh, Preetpal
Canada is home to thousands of freshwater lakes and rivers. Apart from being sources of infinite natural beauty, rivers and lakes are an important source of water, food and transportation. The northern hemisphere of Canada experiences extreme cold temperatures in the winter, resulting in a freeze-up of regional lakes and rivers. Frozen lakes and rivers offer unique opportunities in terms of wildlife harvesting and winter transportation. Ice roads built on frozen rivers and lakes are vital supply lines for industrial operations in the remote north. Monitoring the ice freeze-up and break-up dates annually can help predict regional climatic changes. Lake ice impacts a variety of physical, ecological and economic processes. The construction and maintenance of a winter road can cost millions of dollars annually. A good understanding of ice mechanics is required to build an ice road and deem it safe. A crucial factor in calculating the load-bearing capacity of ice sheets is the thickness of the ice. Construction costs are mainly attributed to producing and maintaining a specific thickness and density of ice that can support different loads. Climate change is leading to warmer temperatures, causing the ice to thin faster. At a certain point, a winter road may not be thick enough to support travel and transportation. There is considerable interest in monitoring winter road conditions given the high construction and maintenance costs involved. Remote sensing technologies such as Synthetic Aperture Radar have been successfully utilized to study the extent of ice covers and record freeze-up and break-up dates of ice on lakes and rivers across the north. Ice road builders often use ultrasound equipment to measure ice thickness. However, an automated monitoring system based on machine vision and image processing technology that can measure lake ice thickness has not yet been developed. Machine vision and image processing techniques have successfully been used in manufacturing to detect equipment failure and identify defective products at the assembly line. The research work in this thesis combines machine vision and image processing technology to build a digital imaging and processing system for monitoring and measuring lake ice thickness in real time. An ultra-compact USB camera is programmed to acquire and transmit high-resolution imagery for processing with the MATLAB Image Processing Toolbox. The image acquisition and transmission process is fully automated; image analysis is semi-automated and requires limited user input. Potential design changes to the prototype and ideas on fully automating the imaging and processing procedure are presented to conclude this research work.
Galileo's eye: a new vision of the senses in the work of Galileo Galilei.
Piccolino, Marco; Wade, Nicholas J
2008-01-01
Reflections on the senses, and particularly on vision, permeate the writings of Galileo Galilei, one of the main protagonists of the scientific revolution. This aspect of his work has received scant attention from historians, in spite of its importance for his achievements in astronomy and for its significance in the innovative scientific methodology he fostered. Galileo's study of vision pursued a different path from the mainstream of contemporary studies in the field, which were concerned with the dioptrics and anatomy of the eye, as elaborated mainly by Johannes Kepler and Christoph Scheiner. Galileo was more concerned with the phenomenology than with the mechanisms of the visual process. His general interest in the senses was psychological and philosophical; it reflected the fallacies and limits of the senses and the ways in which scientific knowledge of the world could be gathered from potentially deceptive appearances. Galileo's innovative conception of the relation between the senses and external reality contrasted with the classical tradition dominated by Aristotle; it paved the way for the modern understanding of sensory processing, culminating two centuries later in Johannes Müller's elaboration of the doctrine of specific nerve energies and in Helmholtz's general theory of perception.
Some examples of image warping for low vision prosthesis
NASA Technical Reports Server (NTRS)
Juday, Richard D.; Loshin, David S.
1988-01-01
NASA has developed an image processor, the Programmable Remapper, for certain functions in machine vision. The Remapper performs a highly arbitrary geometric warping of an image at video rate. It might ultimately be shrunk to a size and cost that could allow its use in a low-vision prosthesis. Coordinate warpings have been developed for retinitis pigmentosa (tunnel vision) and for maculopathy (loss of central field) that are intended to make best use of the patient's remaining viable retina. The rationales and mathematics are presented for some warpings that we will try in clinical studies using the Remapper's prototype.
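A toy example of such a coordinate warping is sketched below: a simple radial remap that compresses the peripheral field toward the centre of a grayscale image, loosely in the spirit of a tunnel-vision remapping. The exponent is an illustrative parameter, not a clinical value, and the clinical warp functions themselves are not reproduced:

```python
# Toy radial coordinate warp: pull peripheral content inward (power < 1).
import numpy as np
from scipy.ndimage import map_coordinates

def radial_warp(img, power=0.7):
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)
    rmax = r.max()
    # Each output radius r samples the source at rmax * (r / rmax) ** power.
    scale = (r / rmax) ** power * rmax / np.maximum(r, 1e-9)
    coords = [cy + (yy - cy) * scale, cx + (xx - cx) * scale]
    return map_coordinates(img, coords, order=1)
```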
Using deep learning in image hyper spectral segmentation, classification, and detection
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Su, Zhenyu
2018-02-01
Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land cover classification, vehicle detection in satellite images, and hyperspectral image classification. This paper addresses the use of deep learning artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. Remote sensing images often exhibit large hue differences, which results in poor display of the images in a VR environment. Image segmentation is a preprocessing technique applied to the original images that splits an image into many parts of differing hue so that the color can be unified. Several computational models based on supervised, unsupervised, parametric, and probabilistic region-based image segmentation techniques have been proposed. Recently, a machine learning technique known as deep learning with convolutional neural networks has been widely used for the development of efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than on traditional image segmentation strategies.
Image-algebraic design of multispectral target recognition algorithms
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.
1994-06-01
In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.
A Review of Imaging Techniques for Plant Phenotyping
Li, Lei; Zhang, Qin; Huang, Danfeng
2014-01-01
Given the rapid development of plant genomic technologies, a lack of access to plant phenotyping capabilities limits our ability to dissect the genetics of quantitative traits. Effective, high-throughput phenotyping platforms have recently been developed to solve this problem. In high-throughput phenotyping platforms, a variety of imaging methodologies are being used to collect data for quantitative studies of complex traits related to the growth, yield and adaptation to biotic or abiotic stress (disease, insects, drought and salinity). These imaging techniques include visible imaging (machine vision), imaging spectroscopy (multispectral and hyperspectral remote sensing), thermal infrared imaging, fluorescence imaging, 3D imaging and tomographic imaging (MRT, PET and CT). This paper presents a brief review on these imaging techniques and their applications in plant phenotyping. The features used to apply these imaging techniques to plant phenotyping are described and discussed in this review. PMID:25347588
Trends in communicative access solutions for children with cerebral palsy.
Myrden, Andrew; Schudlo, Larissa; Weyand, Sabine; Zeyl, Timothy; Chau, Tom
2014-08-01
Access solutions may facilitate communication in children with limited functional speech and motor control. This study reviews current trends in access solution development for children with cerebral palsy, with particular emphasis on the access technology that harnesses a control signal from the user (eg, movement or physiological change) and the output device (eg, augmentative and alternative communication system) whose behavior is modulated by the user's control signal. Access technologies have advanced from simple mechanical switches to machine vision (eg, eye-gaze trackers), inertial sensing, and emerging physiological interfaces that require minimal physical effort. Similarly, output devices have evolved from bulky, dedicated hardware with limited configurability, to platform-agnostic, highly personalized mobile applications. Emerging case studies encourage the consideration of access technology for all nonverbal children with cerebral palsy with at least nascent contingency awareness. However, establishing robust evidence of the effectiveness of the aforementioned advances will require more expansive studies. © The Author(s) 2014.
A scale-invariant change detection method for land use/cover change research
NASA Astrophysics Data System (ADS)
Xing, Jin; Sieber, Renee; Caelli, Terrence
2018-07-01
Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms in computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale-heterogeneous images. This method is composed of an entropy-based spatial decomposition; two scale-invariant feature extraction methods, the Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transform (SIFT) algorithms; a spatial regression voting method to integrate the MSER and SIFT results; a Markov Random Field-based smoothing method; and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides accuracy similar to that of the resampling-based approach while avoiding the LUCC distortion incurred by resampling.
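A sketch of the feature-extraction stage follows, assuming two co-registered images at different resolutions and OpenCV >= 4.4 (where SIFT is in the main package); the regression voting, MRF smoothing, and SVM stages are omitted:

```python
# Scale-invariant feature stage: MSER regions plus SIFT keypoints on two
# images of different resolution, with ratio-test descriptor matching.
# The image paths are assumed example inputs.
import cv2

img_a = cv2.imread("t1_highres.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("t2_lowres.png", cv2.IMREAD_GRAYSCALE)

mser = cv2.MSER_create()
regions_a, _ = mser.detectRegions(img_a)   # stable regions per image

sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(img_a, None)
kp_b, des_b = sift.detectAndCompute(img_b, None)

# Lowe's ratio test keeps scale-robust correspondences for change voting.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
        if m.distance < 0.75 * n.distance]
```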
Multi-electrode stimulation in somatosensory cortex increases probability of detection
NASA Astrophysics Data System (ADS)
Zaaimi, Boubker; Ruiz-Torres, Ricardo; Solla, Sara A.; Miller, Lee E.
2013-10-01
Objective. Brain machine interfaces (BMIs) that decode control signals from motor cortex have developed tremendously in the past decade, but virtually all rely exclusively on vision to provide feedback. There is now increasing interest in developing an afferent interface to replace natural somatosensation, much as the cochlear implant has done for the sense of hearing. Preliminary experiments toward a somatosensory neuroprosthesis have mostly addressed the sense of touch, but proprioception, the sense of limb position and movement, is also critical for the control of movement. However, proprioceptive areas of cortex lack the precise somatotopy of tactile areas. We showed previously that there is only a weak tendency for neighboring neurons in area 2 to signal similar directions of hand movement. Consequently, stimulation with the relatively large currents used in many studies is likely to activate a rather heterogeneous set of neurons. Approach. Here, we have compared the effect of single-electrode stimulation at subthreshold levels to the effect of stimulating as many as seven electrodes in combination. Main results. We found a mean enhancement in the sensitivity to the stimulus (d′) of 0.17 for pairs compared to individual electrodes (an increase of roughly 30%), and an increase of 2.5 for groups of seven electrodes (260%). Significance. We propose that a proprioceptive interface made up of several hundred electrodes may yield safer, more effective sensation than a BMI using fewer electrodes and larger currents.
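The sensitivity index d′ quoted above is computed from hit and false-alarm rates in the usual way; a one-liner with illustrative rates (not the study's data):

```python
# Sensitivity index d' = Z(hit rate) - Z(false-alarm rate).
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(0.80, 0.30))   # ~1.37 for these illustrative rates
```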
Community identities as visions for landscape change
William P. Stewart; Derek Liebert; Kevin W. Larkin
2004-01-01
Residents' felt senses of their community can play substantial roles in determining visions for landscape change. Community identities are often anchored in tangible environments and events of a community, and have the potential to serve as visions for landscape planning processes. Photo-elicitation is applied in this study to connect community-based meanings to...
Using an auditory sensory substitution device to augment vision: evidence from eye movements.
Wright, Thomas D; Margolis, Aaron; Ward, Jamie
2015-03-01
Sensory substitution devices convert information normally associated with one sense into another sense (e.g. converting vision into sound). This is often done to compensate for an impaired sense. The present research uses a multimodal approach in which both natural vision and sound-from-vision ('soundscapes') are simultaneously presented. Although there is a systematic correspondence between what is seen and what is heard, we introduce a local discrepancy between the signals (the presence of a target object that is heard but not seen) that the participant is required to locate. In addition to behavioural responses, the participants' gaze is monitored with eye-tracking. Although the target object is only presented in the auditory channel, behavioural performance is enhanced when visual information relating to the non-target background is presented. In this instance, vision may be used to generate predictions about the soundscape that enhances the ability to detect the hidden auditory object. The eye-tracking data reveal that participants look for longer in the quadrant containing the auditory target even when they subsequently judge it to be located elsewhere. As such, eye movements generated by soundscapes reveal the knowledge of the target location that does not necessarily correspond to the actual judgment made. The results provide a proof of principle that multimodal sensory substitution may be of benefit to visually impaired people with some residual vision and, in normally sighted participants, for guiding search within complex scenes.
USDA-ARS?s Scientific Manuscript database
The overall objective of this research was to develop an in-field presorting and grading system to separate undersized and defective fruit from fresh market-grade apples. To achieve this goal, a cost-effective machine vision inspection prototype was built, which consisted of a low-cost color camera,...
Scanning System -- Technology Worth a Look
Philip A. Araman; Daniel L. Schmoldt; Richard W. Conners; D. Earl Kline
1995-01-01
In an effort to help automate the inspection for lumber defects, optical scanning systems are emerging as an alternative to the human eye. Although still in its infancy, scanning technology is being explored by machine companies and universities. This article was excerpted from "Machine Vision Systems for Grading and Processing Hardwood Lumber," by Philip...
Biological Basis For Computer Vision: Some Perspectives
NASA Astrophysics Data System (ADS)
Gupta, Madan M.
1990-03-01
Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
Linear- and Repetitive Feature Detection Within Remotely Sensed Imagery
2017-04-01
applicable to Python or other programming languages with image-processing capabilities. ... The first methodology uses... remotely sensed images that are in panchromatic or true-color formats. Image-processing techniques, including Hough transforms, machine learning, and... data fusion... context-based processing
Robotic Quantification of Position Sense in Children With Perinatal Stroke.
Kuczynski, Andrea M; Dukelow, Sean P; Semrau, Jennifer A; Kirton, Adam
2016-09-01
Background. Perinatal stroke is the leading cause of hemiparetic cerebral palsy. Motor deficits and their treatment are commonly emphasized in the literature. Sensory dysfunction may be an important contributor to disability, but it is difficult to measure accurately clinically. Objective. To use robotics to quantify position sense deficits in hemiparetic children with perinatal stroke and determine their association with common clinical measures. Methods. Case-control study. Participants were children aged 6 to 19 years with magnetic resonance imaging-confirmed unilateral perinatal arterial ischemic stroke or periventricular venous infarction and symptomatic hemiparetic cerebral palsy. Participants completed a position matching task using an exoskeleton robotic device (KINARM). Position matching variability, shift, and expansion/contraction area were measured with and without vision. Robotic outcomes were compared across stroke groups and controls and to clinical measures of disability (Assisting Hand Assessment) and sensory function. Results. Forty stroke participants (22 arterial, 18 venous, median age 12 years, 43% female) were compared with 60 healthy controls. Position sense variability was impaired in arterial (6.01 ± 1.8 cm) and venous (5.42 ± 1.8 cm) stroke compared to controls (3.54 ± 0.9 cm, P < .001) with vision occluded. Impairment remained when vision was restored. Robotic measures correlated with functional disability. Sensitivity and specificity of clinical sensory tests were modest. Conclusions. Robotic assessment of position sense is feasible in children with perinatal stroke. Impairment is common and worse in arterial lesions. Limited correction with vision suggests cortical sensory network dysfunction. Disordered position sense may represent a therapeutic target in hemiparetic cerebral palsy. © The Author(s) 2016.
The Mind's Eye: Reading from "Scientific American."
ERIC Educational Resources Information Center
Scientific American, Inc., New York, NY.
Understanding vision is not a simple task. Nevertheless, a great deal is known about vision, more than about any of our other senses. The articles collected in this volume were chosen and organized with the intention of providing a survey of a number of different areas of vision research. Three major sections focus on the general categories of…
Visionary Leadership: Creating a Compelling Sense of Direction for Your Organization.
ERIC Educational Resources Information Center
Nanus, Burt
This book on institutional leadership and in particular on how to develop a leadership style that springs from a shared vision of an organization's future, or "visionary leadership," contains nine chapters divided into three parts. Part 1, "What Vision is and Why It Matters," contains two chapters, which define vision, explain why it is important…
In-process fault detection for textile fabric production: onloom imaging
NASA Astrophysics Data System (ADS)
Neumann, Florian; Holtermann, Timm; Schneider, Dorian; Kulczycki, Ashley; Gries, Thomas; Aach, Til
2011-05-01
Constant and traceable high fabric quality is of great importance both for technical and for high-quality conventional fabrics. Usually, quality inspection is carried out by trained personnel, whose detection rate and maximum period of concentration are limited. Low-resolution automated fabric inspection machines using texture analysis were developed. Since 2003, systems for in-process inspection on weaving machines ("onloom") have been commercially available. With these, defects can be detected, but not measured precisely and quantitatively. Most systems are also prone to inevitable machine vibrations. Feedback loops for fault prevention are not established. Technology has evolved since 2003: camera and computer prices dropped, resolutions were enhanced, recording speeds increased. These are the preconditions for real-time processing of high-resolution images. So far, these new technological achievements are not used in textile fabric production. For efficient use, a measurement system must be integrated into the weaving process, and new algorithms for defect detection and measurement must be developed. The goal of the joint project is the development of a modern machine vision system for nondestructive onloom fabric inspection. The system consists of a vibration-resistant machine integration, a high-resolution machine vision system, and new, reliable, and robust algorithms with a quality database for defect documentation. The system is meant to detect, measure, and classify at least 80% of economically relevant defects. Concepts for feedback loops into the weaving process will be pointed out.
Precise positioning method for multi-process connecting based on binocular vision
NASA Astrophysics Data System (ADS)
Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan
2016-01-01
With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors and signal transmitters is increasingly urgent. Such parts, with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy, are generally fabricated by a combination of different manufacturing technologies. However, it is difficult to ensure the machining precision because of the connection error between different processing methods. Therefore, a precise positioning method based on binocular micro stereo vision is proposed in this paper. Firstly, a novel and efficient camera calibration method for the stereoscopic microscope is presented to solve the problems of narrow field of view, small depth of focus, and numerous nonlinear distortions. Secondly, extraction algorithms for law curves and free curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereo vision is set up and embedded in a CNC machining experiment platform. Finally, a verification experiment of the positioning accuracy is conducted; the experimental results indicate that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.
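After calibration, the core of binocular 3D positioning is standard two-view triangulation. A minimal sketch follows, assuming the 3x4 projection matrices P1 and P2 are already known from the stereo-microscope calibration:

```python
# Two-view triangulation of one matched feature point.
# P1, P2: 3x4 camera projection matrices (assumed from prior calibration);
# pt1, pt2: pixel coordinates of the correspondence in each view.
import cv2
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    X = cv2.triangulatePoints(P1, P2,
                              np.float32(pt1).reshape(2, 1),
                              np.float32(pt2).reshape(2, 1))
    # Homogeneous -> Euclidean, in the calibration's length units.
    return (X[:3] / X[3]).ravel()
```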
A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding
Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo
2017-01-01
For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may be different from the predetermined path. Therefore, it is significant to recognize the actual groove position using machine vision methods and perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, the existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. This method obtains two kinds of visual information nearly at the same time, namely 2D pixel coordinates of the groove in uniform lighting condition and 3D point cloud data of the workpiece surface in cross-line laser lighting condition. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments are carried out at an actuator speed of 2300 mm/min and groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied in path teaching fields of 3D complex components. PMID:28492481
Applications of color machine vision in the agricultural and food industries
NASA Astrophysics Data System (ADS)
Zhang, Min; Ludas, Laszlo I.; Morgan, Mark T.; Krutz, Gary W.; Precetti, Cyrille J.
1999-01-01
Color is an important factor in agriculture and the food industry. Agricultural or prepared food products are often graded by producers and consumers using color parameters. Color is used to estimate maturity and sort produce for defects, but also to perform genetic screenings or make aesthetic judgements. The task of sorting produce following a color scale is very complex and requires special illumination and training. Moreover, this task cannot be performed for long durations without fatigue and loss of accuracy. This paper describes a machine vision system designed to perform color classification in real time. Applications for sorting a variety of agricultural products are included, e.g. seeds, meat, baked goods, plants and wood. First, the theory of color classification of agricultural and biological materials is introduced. Then, some tools for classifier development are presented. Finally, the implementation of the algorithm on real-time image processing hardware and example applications for industry are described. This paper also presents an image analysis algorithm and a prototype machine vision system developed for industry. The system automatically locates the surface of plants using a digital camera and predicts information such as the size, potential value and type of the plant. The algorithm developed is feasible for real-time identification in an industrial environment.
Real-time machine vision system using FPGA and soft-core processor
NASA Astrophysics Data System (ADS)
Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad
2012-06-01
This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. The image component labeling and feature extraction modules run in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption of the designed system. The FPGA-based machine vision system that we propose has high frame speed, low latency and a power consumption much lower than commercially available smart camera solutions.
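A software analogue of the labeling-plus-measurement chain (the FPGA modules themselves are RTL, not Python) can be sketched as follows, with a placeholder binary frame standing in for the thresholded sensor image:

```python
# Label connected components, take centroids as reference-point features,
# and compute distance and angle between them (pixel units; metric
# conversion would require camera calibration).
import numpy as np
from scipy import ndimage

binary = np.zeros((64, 64), bool)        # placeholder thresholded frame
binary[10:14, 10:14] = binary[40:46, 50:56] = True

labels, n = ndimage.label(binary)
centroids = ndimage.center_of_mass(binary, labels, range(1, n + 1))

(y1, x1), (y2, x2) = centroids[:2]
distance = np.hypot(y2 - y1, x2 - x1)
angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
```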
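Toward open set recognition.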
Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E
2013-07-01
To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.
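As a hedged sketch of the core idea (not the paper's exact constrained-minimization formulation), the snippet below bounds the accept region of a linear SVM with two score thresholds calibrated on training data, so samples whose scores fall far outside the known class's slab are rejected as unknown. The data are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_known = rng.normal(0, 1, (200, 2)) + np.array([3, 0])
X_other = rng.normal(0, 1, (200, 2))                       # negatives seen in training
X_unknown = rng.normal(0, 1, (50, 2)) + np.array([8, 8])   # never seen in training

clf = LinearSVC(C=1.0).fit(np.vstack([X_known, X_other]),
                           np.hstack([np.ones(200), np.zeros(200)]))

# Calibrate a score "slab" on the known class's training scores
scores = clf.decision_function(X_known)
lower, upper = np.quantile(scores, 0.05), np.quantile(scores, 0.95)

def predict_open_set(x):
    s = clf.decision_function(x.reshape(1, -1))[0]
    return "known" if lower <= s <= upper else "unknown/rejected"

print(predict_open_set(X_known[0]))    # expected: known
print(predict_open_set(X_unknown[0]))  # far beyond the slab -> rejected
```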
NASA Technical Reports Server (NTRS)
Allen, B. Danette; Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Crisp, Vicki K.
2015-01-01
NASA aeronautics research has made decades of contributions to aviation. Both aircraft and air traffic management (ATM) systems in use today contain NASA-developed and NASA sponsored technologies that improve safety and efficiency. Recent innovations in robotics and autonomy for automobiles and unmanned systems point to a future with increased personal mobility and access to transportation, including aviation. Automation and autonomous operations will transform the way we move people and goods. Achieving this mobility will require safe, robust, reliable operations for both the vehicle and the airspace and challenges to this inevitable future are being addressed now in government labs, universities, and industry. These challenges are the focus of NASA Langley Research Center's Autonomy Incubator whose R&D portfolio includes mission planning, trajectory and path planning, object detection and avoidance, object classification, sensor fusion, controls, machine learning, computer vision, human-machine teaming, geo-containment, open architecture design and development, as well as the test and evaluation environment that will be critical to prove system reliability and support certification. Safe autonomous operations will be enabled via onboard sensing and perception systems in both data-rich and data-deprived environments. Applied autonomy will enable safety, efficiency and unprecedented mobility as people and goods take to the skies tomorrow just as we do on the road today.
Bio-Inspired Sensing and Imaging of Polarization Information in Nature
2008-05-04
In our group we have been developing various man-made, non-invasive imaging methodologies, sensing schemes, camera systems, and visualization and display techniques.
Machine vision method for online surface inspection of easy open can ends
NASA Astrophysics Data System (ADS)
Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel
2006-10-01
The easy open can end manufacturing process in the food canning sector currently relies on a manual, non-destructive testing procedure to guarantee can end repair coating quality. This surface inspection is a visual inspection made by human inspectors. Due to the high production rate (100 to 500 ends per minute), only a small part of each lot is verified (statistical sampling), so an automatic, online inspection system based on machine vision has been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. The surface inspection system checks the total production, classifies the ends in agreement with an expert human inspector, supplies interpretability to the operators in order to find failure causes and reduce mean time to repair during failures, and allows the minimum can end repair coating quality to be modified.
Predicting pork loin intramuscular fat using computer vision system.
Liu, J-H; Sun, X; Young, J M; Bachmeier, L A; Newman, D J
2018-09-01
The objective of this study was to investigate the ability of a computer vision system to predict pork intramuscular fat percentage (IMF%). Center-cut loin samples (n = 85) were trimmed of subcutaneous fat and connective tissue. Images were acquired, and pixels were segregated to estimate image IMF% and 18 image color features for each image. Subjective IMF% was determined by a trained grader. Ether extract IMF% was calculated using the ether extract method. Image color features and image IMF% were used as predictors in stepwise regression and support vector machine models. Results showed that subjective IMF% had a correlation of 0.81 with ether extract IMF%, while image IMF% had a correlation of 0.66 with ether extract IMF%. Accuracy rates were 0.63 for the stepwise regression model and 0.75 for the support vector machine model. Although subjective IMF% has been shown to give better predictions, the computer vision results demonstrate its potential as a tool for predicting pork IMF% in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
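A minimal sketch of the modeling step, assuming scikit-learn and synthetic stand-in data (the 85 loins and exact feature set are not reproduced here); a support vector regressor maps the 19 image features to IMF%, one plausible reading of the paper's support-vector setup.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: rows of image features (image IMF% plus 18 color statistics), synthetic here
rng = np.random.default_rng(0)
X = rng.random((85, 19))
y = 1.5 + 3.0 * X[:, 0] + rng.normal(0, 0.2, 85)  # synthetic ether-extract IMF%

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:60], y[:60])              # train on 60 loins
print(model.score(X[60:], y[60:]))     # R^2 on the held-out 25
```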
Experimental results in autonomous landing approaches by dynamic machine vision
NASA Astrophysics Data System (ADS)
Dickmanns, Ernst D.; Werner, Stefan; Kraus, S.; Schell, R.
1994-07-01
The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to onboard autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was initially developed, it is also used for data fusion from a range of sensors. By prediction error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control outputs are derived. The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with a twin turboprop aircraft Do128 under perturbations from cross winds and wind gusts. The software package has been ported to `C' and onto a new transputer image processing platform; the system has been expanded for bifocal vision, with two cameras of different focal lengths mounted fixed relative to each other on a two-axis platform for viewing direction control.
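The prediction-error feedback loop can be illustrated with a toy recursive estimator: a process model predicts the state, and the residual between the image-derived measurement and the prediction corrects the internal representation. The model, gain, and numbers below are illustrative, not the 4-D system's actual filter.

```python
def predict(x, u, dt):
    # Toy process model: lateral offset integrates a known drift rate
    return x + u * dt

def update(x_pred, z, gain=0.3):
    residual = z - x_pred          # prediction error from the image measurement
    return x_pred + gain * residual

x = 0.0
for z in [0.5, 0.8, 1.0, 1.1]:     # measured lateral offsets from vision (m)
    x = update(predict(x, u=0.1, dt=0.1), z)
print(x)                            # servo-maintained internal estimate
```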
Autonomous proximity operations using machine vision for trajectory control and pose estimation
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Sternberg, Stanley R.
1991-01-01
A machine vision algorithm was developed that permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running on an 80386-based personal computer and using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, on which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether a signature matches the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
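A hedged sketch of the radial transform step: sample the centroid-to-boundary distance of a silhouette at fixed angles to form a signature for matching against stored references. The toy mask stands in for a rendered target view.

```python
import numpy as np

def radial_signature(mask, n_angles=64):
    """mask: 2-D boolean array, True inside the target silhouette."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)
    bins = ((theta + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    sig = np.zeros(n_angles)
    np.maximum.at(sig, bins, r)    # max radius per angular bin = boundary distance
    return sig / sig.max()         # scale-normalised signature

mask = np.zeros((100, 100), bool)
mask[30:70, 40:60] = True          # toy rectangular "vehicle" silhouette
print(radial_signature(mask)[:8])
```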
What is consciousness, and could machines have it?
Dehaene, Stanislas; Lau, Hakwan; Kouider, Sid
2017-10-27
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word "consciousness" conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures. Copyright © 2017, American Association for the Advancement of Science.
The East, the West and the universal machine.
Marchal, Bruno
2017-12-01
After reviewing the basics of the theology of Universal Numbers/Machines, as detailed in Marchal (2007), I illustrate how that body of thought might be used to shed some light upon the apparent dichotomy in Eastern/Western spirituality. This paper relies entirely on my previous interdisciplinary work in mathematical logic, computer science, and machine's theology, where "theology" is used here in the sense of Plato: it is the truth, or the "truth-theory" (in the sense of logicians), about a machine that the machine can either deduce from some of its primitive beliefs or intuit in some sense that is eventually made clear through the modal logic of machine self-reference. Such a theology appears to be testable, because it has been shown that physics must necessarily be retrieved from it when we assume the mechanist hypothesis in the cognitive sciences, and this in a unique, precise (introspective) way, so that we need only compare the physics of the introspective machine with the physics inferred from human observation; up to now, it is the only theory known to fit both the existence of personal "consciousness" (undoubtable yet unjustifiable truth) and quanta and quantum relationships (Marchal, 1998; Marchal, 2004; Marchal, 2013; Marchal, 2015). Copyright © 2017 Elsevier Ltd. All rights reserved.
Proceedings of the 1986 IEEE international conference on systems, man and cybernetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-01-01
This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.
Development of a Machine-Vision System for Recording of Force Calibration Data
NASA Astrophysics Data System (ADS)
Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat
This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system were used to capture images of the readings from the instruments during calibration. The measurement images were then transformed and translated into numerical data using optical character recognition (OCR). These numerical data, along with the raw images, were automatically saved to memory as the calibration database files. With this new system, human recording errors are eliminated. Verification experiments were done by using this system to record the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN). The NIMT 100-kN deadweight force standard machine (DWM-100kN) was used to generate test forces. The experiments were set up in three categories: 1) dynamic condition (recording during load changes), 2) static condition (recording during fixed load), and 3) full calibration experiments in accordance with ISO 376:2011. In the dynamic-condition experiment, >94% of the captured images showed no overlapping of digits; in the static-condition experiment, >98% of images showed no overlapping. All measurement images without overlapping were translated to numbers by the developed program with 100% accuracy. The full calibration experiments also gave 100% accurate results. Moreover, in case of an incorrect translation of any result, it is possible to trace back to the raw calibration image to check and correct it. This machine-vision-based system and program should therefore be appropriate for recording force calibration data.
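A hedged sketch of the capture-and-OCR loop, using OpenCV and pytesseract as stand-ins for the system's actual software; the camera index, Otsu thresholding, and digit whitelist are assumptions, and the raw frame is archived for traceability as the paper describes.

```python
import cv2
import pytesseract

cam = cv2.VideoCapture(0)                      # real-time camera (index assumed)
ok, frame = cam.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Single-line numeric reading from the instrument display
    text = pytesseract.image_to_string(
        binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.-")
    cv2.imwrite("calibration_raw.png", frame)  # keep the raw image for trace-back
    print("reading:", text.strip())
cam.release()
```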
Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia
2012-06-01
Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and we provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
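A minimal sketch of the state-machine idea, assuming classified imagery labels arrive as strings; the label names and command format are illustrative, not the study's actual task set.

```python
class PawnCupStateMachine:
    """Waits for a 'pawn' selection, then a 'cup' selection, then emits one
    pick-and-place command for the vision-guided robot arm."""

    def __init__(self):
        self.pawn = None                    # first half of a two-part command

    def on_label(self, label):
        if label.startswith("pawn"):
            self.pawn = label               # e.g. an imagery task decoded as "pawn_red"
            return None
        if label.startswith("cup") and self.pawn is not None:
            cmd = f"pick({self.pawn}) -> place({label})"
            self.pawn = None
            return cmd
        return None                         # ignore out-of-sequence labels

sm = PawnCupStateMachine()
sm.on_label("pawn_red")
print(sm.on_label("cup_left"))              # -> pick(pawn_red) -> place(cup_left)
```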
High-Resolution Adaptive Optics Test-Bed for Vision Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilks, S C; Thompson, C A; Olivier, S S
2001-09-27
We discuss the design and implementation of a low-cost, high-resolution adaptive optics test-bed for vision research. It is well known that high-order aberrations in the human eye reduce optical resolution and limit visual acuity. However, the effects of aberration-free eyesight on vision are only now beginning to be studied using adaptive optics to sense and correct the aberrations in the eye. We are developing a high-resolution adaptive optics system for this purpose using a Hamamatsu Parallel Aligned Nematic Liquid Crystal Spatial Light Modulator. Phase-wrapping is used to extend the effective stroke of the device, and the wavefront sensing and wavefront correction are done at different wavelengths. Issues associated with these techniques will be discussed.
A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors.
Vanarse, Anup; Osseiran, Adam; Rassau, Alexander
2016-01-01
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and as a result tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways similar to neural signals. This allows for much lower power consumption due to the ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in the understanding of biological sensing, together with advanced electronics, have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state of the art in the neuromorphic implementation of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing these key contributions together, we suggest future research directions for further development of the neuromorphic sensing field.
3D visual mechanism by neural networking
NASA Astrophysics Data System (ADS)
Sugiyama, Shigeki
2007-04-01
Some computer vision systems are available on the market, but they remain far from everyday use in applications such as security monitoring or the recognition of a target object's behaviour. Sensing such surroundings may require recognizing a detailed description of an object, such as the distance to the object, its detailed shape, and its edges, for which present recognition systems provide no clear mechanistic picture. To this end, this paper studies the mechanisms by which a pair of human eyes recognizes distance, object edges, and objects themselves, in order to extract the basic essences of vision mechanisms. These basic mechanisms of object recognition are then simplified and extended logically for application to a computer vision system. Some of the results of these studies are introduced in this paper.
Kiani, Sajad; Minaei, Saeid
2016-12-01
Saffron quality characterization is an important issue in the food industry and of interest to consumers. This paper proposes an expert system based on machine vision technology for the characterization of saffron and shows how it can be employed in practice. There is a correlation between saffron color, its geographic location of production, and certain chemical attributes, which can be used to characterize saffron quality and freshness. This may be accomplished by employing image processing techniques coupled with multivariate data analysis to quantify saffron properties. Expert algorithms can be made available for predicting saffron characteristics such as color, as well as for product classification. Copyright © 2016. Published by Elsevier Ltd.
Forest mensuration with remote sensing: A retrospective and a vision for the future
Randolph H. Wynne
2004-01-01
Remote sensing, while occasionally oversold, has clear potential to reduce the overall cost of traditional forest inventories. Perhaps most important, some of the information needed for more intensive, rather than extensive, forest management is available from remote sensing. These new information needs may justify increased use and the increased cost of remote sensing...
Mechanics of Multifunctional Materials & Microsystems
2012-03-09
Topics covered: mechanics of materials; life prediction (materials and micro-devices); sensing, precognition, and diagnosis; multifunctional design of autonomic systems; autonomic aerospace structures. Approved for public release; distribution is unlimited.
Aquatic Toxic Analysis by Monitoring Fish Behavior Using Computer Vision: A Recent Progress
Fu, Longwen; Liu, Zuoyi
2018-01-01
Video-tracking-based biological early warning systems have made great progress with advanced computer vision and machine learning methods. The ability to video-track multiple biological organisms has improved considerably in recent years. Video-based behavioral monitoring has become a common tool for acquiring quantified behavioral data for aquatic risk assessment. Investigation of behavioral responses under chemical and environmental stress has been boosted by rapidly developing machine learning and artificial intelligence. In this paper, we introduce the fundamentals of video tracking and present pioneering work on the precise tracking of groups of individuals in 2D and 3D space. Technical and practical issues encountered in video tracking are explained. Subsequently, toxicity analysis based on fish behavioral data is summarized. Frequently used computational methods and machine learning are explained along with their applications in aquatic toxicity detection and abnormal pattern analysis. Finally, the advantages of recently developed deep learning approaches for toxicity prediction are presented. PMID:29849612
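A minimal sketch of a video-tracking front end, assuming OpenCV: background subtraction plus per-frame centroid extraction, which supplies the raw trajectories behind behavioral features. The file name and blob-size threshold are placeholders.

```python
import cv2

cap = cv2.VideoCapture("fish_tank.avi")        # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 50:                      # ignore small noise blobs
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    # per-frame centroids feed speed / turning-angle features downstream
cap.release()
```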
A Developmental Approach to Machine Learning?
Smith, Linda B.; Slone, Lauren K.
2017-01-01
Visual learning depends on both the algorithms and the training material. This essay considers the natural statistics of infant- and toddler-egocentric vision. These natural training sets for human visual object recognition are very different from the training data fed into machine vision systems. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order – with slow, smooth visual changes moment-to-moment, and developmentally ordered transitions in scene content. We propose that the skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines. PMID:29259573
A machine vision system for micro-EDM based on linux
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhao, Wansheng; Li, Gang; Li, Zhiyong; Zhang, Yong
2006-11-01
Due to the high precision and good surface quality it can deliver, Electrical Discharge Machining (EDM) is potentially an important process for the fabrication of micro-tools and micro-components. However, a number of issues remain unsolved before micro-EDM becomes a reliable process with repeatable results. To deal with the difficulties in the on-line fabrication of micro electrodes and tool wear compensation, a micro-EDM machine vision system was developed around a Charge Coupled Device (CCD) camera with an optical resolution of 1.61 μm and an overall magnification of 113 to 729. Based on the Linux operating system, an image capturing program was developed with the V4L2 API, and an image processing program was built using OpenCV. The contour of micro electrodes can be extracted by means of the Canny edge detector. Through system calibration, the micro electrode diameter can be measured on-line. Experiments have been carried out to prove its performance, and the sources of measurement error are also analyzed.
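A hedged sketch of the diameter measurement step with OpenCV: Canny edges of the electrode image, row-wise edge spread, and scaling by the calibrated optical resolution (the paper's 1.61 μm figure). The image path and Canny thresholds are placeholders.

```python
import cv2
import numpy as np

UM_PER_PIXEL = 1.61                       # calibrated optical resolution

img = cv2.imread("electrode.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
edges = cv2.Canny(img, 50, 150)           # extract the electrode contour

ys, xs = np.nonzero(edges)
# Electrode assumed roughly vertical: width = left-to-right edge spread per row
widths = []
for y in np.unique(ys):
    row = xs[ys == y]
    if len(row) >= 2:
        widths.append(row.max() - row.min())

diameter_um = float(np.median(widths)) * UM_PER_PIXEL
print(f"electrode diameter ~ {diameter_um:.1f} um")
```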
Learning from vision-to-touch is different than learning from touch-to-vision.
Wismeijer, Dagmar A; Gegenfurtner, Karl R; Drewing, Knut
2012-01-01
We studied whether vision can teach touch to the same extent as touch seems to teach vision. In a 2 × 2 between-participants learning study, we artificially correlated visual gloss cues with haptic compliance cues. In two "natural" tasks, we tested whether visual gloss estimations have an influence on haptic estimations of softness and vice versa. In two "novel" tasks, in which participants were either asked to haptically judge glossiness or to visually judge softness, we investigated how perceptual estimates transfer from one sense to the other. Our results showed that vision does not teach touch as efficiently as touch seems to teach vision.
Development of a Computer Vision Technology for the Forest Products Manufacturing Industry
D. Earl Kline; Richard Conners; Philip A. Araman
1992-01-01
The goal of this research is to create an automated processing/grading system for hardwood lumber that will be of use to the forest products industry. The objective of creating a full-scale machine vision prototype for inspecting hardwood lumber will be realized in calendar year 1992. Space for the full-scale prototype has been created at the Brooks Forest...
High-Level Vision and Planning Workshop Proceedings
1989-08-01
Proceedings of a joint U.S.-Israeli workshop on artificial intelligence are provided in this Institute for Defense Analyses document. A list of participants is provided along with applicable references for individual papers. Subject terms: artificial intelligence; machine vision.
ERIC Educational Resources Information Center
Dollerup, Cay, Ed.; Lindegaard, Annette, Ed.
This selection of papers starts with insights into multi- and plurilingual settings, then proceeds to discussions of aims for practical work with students, and ends with visions of future developments within translation for the mass media and the impact of machine translation. Papers are: "Interpreting at the European Commission";…
ERIC Educational Resources Information Center
Hotchkiss, Wesley
1976-01-01
Author states that religion involves cosmic vision as well as ethical philosophy, and that religious ethics become impotent without this vision. Mystery, defined as the sense of wonder at the revelation of the nature of the cosmos, can be expressed only through art. (RW)
Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.
Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos
2016-05-05
Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple; however, data handling and analysis are not as well developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions, and they were trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully, but they may require different ML algorithms for segmentation.
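A minimal sketch of the pixel-level classification step, assuming scikit-learn and synthetic stand-in pixels; the paper's actual training data and normalisation variants are not reproduced.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
plant_px = rng.normal([40, 120, 50], 15, (500, 3))    # greenish plant pixels
backgnd_px = rng.normal([90, 80, 75], 20, (500, 3))   # soil / chamber pixels

X = np.vstack([plant_px, backgnd_px])
y = np.hstack([np.ones(500), np.zeros(500)])

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

image = rng.normal([60, 100, 60], 30, (64, 64, 3))        # stand-in RGB image
mask = knn.predict(image.reshape(-1, 3)).reshape(64, 64)  # 1 = plant pixel
print("plant pixel fraction:", mask.mean())
```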
Some Examples Of Image Warping For Low Vision Prosthesis
NASA Astrophysics Data System (ADS)
Juday, Richard D.; Loshin, David S.
1988-08-01
NASA and Texas Instruments have developed an image processor, the Programmable Remapper, for certain functions in machine vision. The Remapper performs a highly arbitrary geometric warping of an image at video rate. It might ultimately be shrunk to a size and cost that could allow its use in a low-vision prosthesis. We have developed coordinate warpings for retinitis pigmentosa (tunnel vision) and for maculopathy (loss of central field) that are intended to make best use of the patient's remaining viable retina. The rationales and mathematics are presented for some warpings that we will try in clinical studies using the Remapper's prototype. (Recorded video imagery was shown at the conference for the maculopathy remapping.)
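A hedged sketch of a geometric warp in the Remapper's spirit, using OpenCV's remap as a software stand-in: a radial compression that pulls peripheral content toward the centre, loosely analogous to a tunnel-vision aid. The warp function and strength are illustrative, not the clinical mapping.

```python
import cv2
import numpy as np

def radial_compress_maps(h, w, strength=0.5):
    """Build remap tables so central output pixels sample peripheral input."""
    ys, xs = np.indices((h, w), dtype=np.float32)
    cx, cy = w / 2.0, h / 2.0
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy) / max(cx, cy)              # normalised radius 0..1
    scale = (r ** strength) / np.maximum(r, 1e-6)   # >1 inside: sample outward
    return cx + dx * scale, cy + dy * scale

img = cv2.imread("scene.png")                       # placeholder input frame
map_x, map_y = radial_compress_maps(*img.shape[:2])
warped = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("warped.png", warped)
```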
Supersymmetry For Cognitive Science
NASA Astrophysics Data System (ADS)
Flanagan, Brian J.
1989-03-01
Machine vision may be understood as an attempt to replicate natural vision. The latter process is associated with neural networks. Light enters the eye and sets in motion processes which culminate in observed patterns of color. Light is, of course, an electromagnetic phenomenon. Our nerve cells communicate with each other via electrochemical means. To say that a process is electrochemical is to say that it is electromagnetic, involving the exchange of photons among electrons. It seems, therefore, that we ought to be able to understand vision in terms of the physical theory of electromagnetism. Historically, however, it has been held that such properties as color do not belong to the physical world. Color has long been considered to be a mental effect of physical stimuli. Nevertheless, it is generally understood that color is related to the energy, wavelength, and frequency of the photons which give rise to the "mental" impression of hue and intensity and so forth. Similar arguments and propositions can be made for all of the sensory modalities, but we will restrict our attention to vision for the time being. If, with Mach, we accept that colors are physical objects, we are obliged to seek a suitable place for them within the body of physical theory. Where should we locate them? Colors are given to us as simple entities, having no parts: We can point to an object that is blue, but we cannot say what blue is. Color is given to us as elemental. In a formal theory, we have a number of elements, rules for joining them, well-formed formulae, and methods of proof. It seems to make good sense to place color among the elements of a formal theory (T). If our mind/brains can be modelled by a formal theory, it follows logically that we should not be able to define our elements - i.e., if we could define our elements, they would not be elements.
ERIC Educational Resources Information Center
Pill, Shane
2012-01-01
"Game sense" is a sport-specific iteration of the teaching games for understanding model, designed to balance physical development of motor skill and fitness with the development of game understanding. Game sense can foster a shared vision for sport learning that bridges school physical education and community sport. This article explains how to…
Methodology for creating dedicated machine and algorithm on sunflower counting
NASA Astrophysics Data System (ADS)
Muracciole, Vincent; Plainchault, Patrick; Mannino, Maria-Rosaria; Bertrand, Dominique; Vigouroux, Bertrand
2007-09-01
In order to sell grain lots in European countries, seed industries need government certification. This certification requires purity testing, seed counting to quantify specified seed species and other impurities in lots, and germination testing. These analyses are carried out within the framework of international trade according to the methods of the International Seed Testing Association. Presently, these different analyses are still performed manually by skilled operators. Previous work has shown that seeds can be characterized by around 110 visual features (morphology, colour, texture), and several identification algorithms have been presented. Until now, most of the work in this domain has been computer-based. The approach presented in this article is based on the design of a dedicated electronic vision machine to identify and sort seeds. This machine is composed of an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), and a PC hosting the system's GUI. Its operation relies on the stroboscopic image acquisition of a seed falling in front of a camera. A first machine was designed according to this approach in order to simulate the whole vision chain (image acquisition, feature extraction, identification) under the Matlab environment. To port this task to dedicated hardware, all these algorithms were developed without use of the Matlab toolboxes. The objective of this article is to present a design methodology for a special-purpose identification algorithm, based on distances between groups, implemented in a dedicated hardware machine for seed counting.
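A minimal sketch of a distance-between-groups rule of the kind described: each feature vector is assigned to the class whose standardised group mean is nearest. The features and values are illustrative placeholders.

```python
import numpy as np

def fit_group_means(X, labels):
    mu, sd = X.mean(0), X.std(0) + 1e-9
    Z = (X - mu) / sd                                   # standardise features
    means = {s: Z[labels == s].mean(0) for s in np.unique(labels)}
    return mu, sd, means

def classify(x, mu, sd, means):
    z = (x - mu) / sd
    return min(means, key=lambda s: np.linalg.norm(z - means[s]))

X = np.array([[5.1, 2.3, 0.8], [5.0, 2.4, 0.9],    # sunflower-like features
              [2.2, 1.0, 0.3], [2.1, 1.1, 0.4]])   # impurity-like features
labels = np.array(["sunflower", "sunflower", "other", "other"])
params = fit_group_means(X, labels)
print(classify(np.array([4.8, 2.2, 0.85]), *params))  # -> "sunflower"
```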
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep
2010-06-05
Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.
Nondestructive and rapid detection of potato black heart based on machine vision technology
NASA Astrophysics Data System (ADS)
Tian, Fang; Peng, Yankun; Wei, Wensong
2016-05-01
Potatoes are one of the major food crops in the world. Potato black heart is a defect in which the surface remains intact while the internal tissue turns black. Such potatoes are no longer edible, but the defect is difficult to detect with conventional methods. A nondestructive detection system based on machine vision technology is proposed in this study to distinguish normal and black heart potatoes according to their different transmittance. The detection system is equipped with a monochrome CCD camera, LED light sources for transmitted illumination, and a computer. First, transmission images of normal and black heart potatoes were taken by the detection system. The images were then processed by an algorithm written in VC++. As the transmitted light intensity is influenced by the radial dimension of the potato samples, the relationship between the grayscale value and the potato radial dimension was acquired by analyzing how the grayscale values of the transmission images change. A proper judging condition was then established to distinguish normal and black heart potatoes after image preprocessing. The results showed that the system, coupled with the processing methods, detected potato black heart at a considerable accuracy rate. The transmission detection technique based on machine vision is nondestructive and feasible for detecting potato black heart.
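A minimal sketch of such a judging condition, assuming a linear grey-level versus radius relationship; the coefficients, threshold, and file name are placeholders for the empirically fitted relationship the paper derives.

```python
import cv2

A, B = 220.0, 1.8          # assumed linear fit: expected_grey = A - B * radius_mm
THRESHOLD = 0.65           # assumed fraction of expected grey for "normal"

def is_black_heart(transmission_img, radius_mm):
    mask = transmission_img > 10                 # pixels inside the potato
    mean_grey = transmission_img[mask].mean()
    expected = A - B * radius_mm                 # size-corrected baseline
    return mean_grey < THRESHOLD * expected      # darker than expected -> defect

img = cv2.imread("potato_transmission.png", cv2.IMREAD_GRAYSCALE)  # placeholder
print("black heart" if is_black_heart(img, radius_mm=45.0) else "normal")
```

Comprehensive survey of deep learning in remote sensing: theories, tools, and challenges for the community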
NASA Astrophysics Data System (ADS)
Ball, John E.; Anderson, Derek T.; Chan, Chee Seng
2017-10-01
In recent years, deep learning (DL), a rebranding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, inevitably RS draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should not only be aware of advancements such as DL, but also be leading researchers in this area. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modeling physical phenomena, (iii) big data, (iv) nontraditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL.
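UFCN: a fully convolutional neural network for road extraction in RGB imagery acquired by remote sensing from an unmanned aerial vehicle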
NASA Astrophysics Data System (ADS)
Kestur, Ramesh; Farooq, Shariq; Abdal, Rameen; Mehraj, Emad; Narasipura, Omkar; Mudigere, Meenavathi
2018-01-01
Road extraction in imagery acquired by low altitude remote sensing (LARS) carried out using an unmanned aerial vehicle (UAV) is presented. LARS is carried out using a fixed-wing UAV with a high-spatial-resolution visible spectrum (RGB) camera as the payload. Deep learning techniques, particularly a fully convolutional network (FCN), are adopted to extract roads by dense semantic segmentation. The proposed model, UFCN (U-shaped FCN), is an FCN architecture comprised of a stack of convolutions followed by a corresponding stack of mirrored deconvolutions, with skip connections in between for preserving local information. The limited dataset (76 images and their ground truths) is subjected to real-time data augmentation during the training phase to increase its effective size. Classification performance is evaluated using precision, recall, accuracy, F1 score, and Brier score. The performance is compared with a support vector machine (SVM) classifier, a one-dimensional convolutional neural network (1D-CNN) model, and a standard two-dimensional CNN (2D-CNN). The UFCN model outperforms the SVM, 1D-CNN, and 2D-CNN models across all performance parameters. Further, the prediction time of the proposed UFCN model is comparable with the SVM, 1D-CNN, and 2D-CNN models.
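A compact, hedged sketch of a U-shaped FCN of this kind in PyTorch; the depth, channel widths, and two-class road/non-road head are illustrative choices, not the paper's exact UFCN configuration.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUFCN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.mid = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)   # mirrored deconvolution
        self.dec2 = conv_block(64, 32)          # 32 upsampled + 32 skip channels
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)          # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, n_classes, 1) # per-pixel road / non-road logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

print(TinyUFCN()(torch.randn(1, 3, 64, 64)).shape)  # -> (1, 2, 64, 64)
```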
The role of personal purpose and personal goals in symbiotic visions.
Berg, Jodi L
2015-01-01
It is believed that symbiotic visions can drive employees and organizations toward a common objective based on the premise that people have a high level of self-motivation and engagement when they are working toward something very personal. The field of organizational development has been aspiring to help organizations and people align their visions for decades without much, if any, empirical support for the role of personal purpose and goals in the symbiotic relationship with a company vision. This qualitative study examines the role personal purpose and goals play in how high performing leaders align to their company's vision. Whether and how senior managers articulate this alignment, and its correlation to their motivation and engagement, was examined. An observation was that most senior managers within organizations with a well-developed and widely known higher purpose vision are driven by something personal, identified as either personal goals or a personal purpose. One of the key findings is that personal purpose and goals, when aligned to a company vision, appear to impact motivation and engagement in different ways. When alignment is felt through the sense of the greater purpose, there is a deep, almost spiritual, commitment to making the world a better place and helping the organization contribute to that. This seems to motivate them to guide the organization toward its higher purpose vision. When alignment is felt through the organization's alignment to one's personal goals, there is a great sense of commitment to completing the steps or tasks necessary to move toward the vision, yet a clear delineation between work and life ambitions.
Learning Control: Sense-Making, CNC Machines, and Changes in Vocational Training for Industrial Work
ERIC Educational Resources Information Center
Berner, Boel
2009-01-01
The paper explores how novices in school-based vocational training make sense of computerized numerical control (CNC) machines. Based on two ethnographic studies in Swedish schools, one from the early 1980s and one from 2006, it analyses change and continuity in the cognitive, social, and emotional processes of learning how to become a machine…
USAF Summer Faculty Research Program. 1981 Research Reports. Volume I.
1981-10-01
Research reports cover topics including robot vision, surface mapping of machine parts and castings, in-line inspection and control, and computer-aided manufacturing, among other faculty research areas.
Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology
NASA Astrophysics Data System (ADS)
Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya
2017-09-01
Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. There are many review articles introducing thermal error research on CNC machine tools, but they mainly focus on thermal issues in small and medium-sized CNC machine tools and seldom introduce thermal error monitoring technologies. This paper gives an overview of research on the thermal error of CNC machine tools and emphasizes the study of the thermal error of heavy-duty CNC machine tools in three areas: the causes of thermal error in heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology, fiber Bragg grating (FBG) distributed sensing, for heavy-duty CNC machine tools is introduced in detail. This technology forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills a gap in this kind of review article, guiding the development of this industrial field and opening up new areas of research on heavy-duty CNC machine tool thermal error.
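As background for the FBG sensing principle, the sketch below applies the standard strain-free Bragg relation, Δλ/λ = (α + ξ)ΔT; the coefficient values are typical silica-fibre numbers and are assumptions, not values from the paper.

```python
ALPHA = 0.55e-6   # thermal expansion coefficient of silica (1/K), assumed
XI = 6.7e-6       # thermo-optic coefficient (1/K), assumed

def temperature_change(lambda_ref_nm, lambda_meas_nm):
    """Temperature change (K) from a measured Bragg wavelength shift,
    assuming a strain-free grating."""
    return (lambda_meas_nm - lambda_ref_nm) / (lambda_ref_nm * (ALPHA + XI))

# A 1550 nm grating shifting by 0.112 nm corresponds to roughly +10 K:
print(temperature_change(1550.000, 1550.112))
```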
Insect vision as model for machine vision
NASA Astrophysics Data System (ADS)
Osorio, D.; Sobey, Peter J.
1992-11-01
The neural architecture, neurophysiology, and behavioral abilities of insect vision are described and compared with those of mammals. Insects have a hardwired neural architecture of highly differentiated neurons, quite different from the cerebral cortex, yet their behavioral abilities are in important respects similar to those of mammals. These observations challenge the view that the key to the power of biological neural computation is distributed processing by a plastic, highly interconnected network of individually undifferentiated and unreliable neurons, a picture that has dominated biological computation since Pitts and McCulloch's seminal work in the 1940s.
University NanoSat Program: AggieSat3
2009-06-01
The design uses the BumbleBee2®, a commercially available binocular stereo machine vision product developed by Point Grey Research that incorporates two cameras. Additional experience was gained in the areas of drawing standards, machining capabilities, solid modeling, and safety during the AggieSat2 satellite program.
Identification Of Cells With A Compact Microscope Imaging System With Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor)
2006-01-01
A Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
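A hedged sketch of one common way such an autofocus step can be realized: sweep the focus axis and maximize a sharpness metric (variance of the Laplacian). The camera and stage interfaces are hypothetical placeholders, not the CMIS API.

```python
import cv2

def sharpness(gray_frame):
    """Focus score: variance of the Laplacian (higher = sharper)."""
    return cv2.Laplacian(gray_frame, cv2.CV_64F).var()

def autofocus(camera, stage, positions):
    best_pos, best_score = None, -1.0
    for z in positions:
        stage.move_to(z)                 # hypothetical stage API
        frame = camera.grab_gray()       # hypothetical camera API
        score = sharpness(frame)
        if score > best_score:
            best_pos, best_score = z, score
    stage.move_to(best_pos)              # settle at the sharpest position
    return best_pos
```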
Man-machine interactive imaging and data processing using high-speed digital mass storage
NASA Technical Reports Server (NTRS)
Alsberg, H.; Nathan, R.
1975-01-01
The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a television system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. A general-purpose digital computer with an extensive special-purpose software system is used to perform an almost unlimited repertoire of processing operations.
Tracking of Cells with a Compact Microscope Imaging System with Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor)
2007-01-01
A Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
Operation of a Cartesian Robotic System in a Compact Microscope with Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor)
2006-01-01
A Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
Local intensity adaptive image coding
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1989-01-01
The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem, as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated that combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed, its performance is characterized in a simulated space application, and the research and development activities are described.
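A hedged sketch of the local-adaptivity idea: normalizing each pixel by a local illumination estimate compresses the shadow-to-sunlight range while preserving local structure. The kernel size and gain are illustrative, not the scheme's actual parameters.

```python
import cv2
import numpy as np

def local_adaptive_code(gray, ksize=31, gain=1.5):
    img = gray.astype(np.float32) + 1.0
    local_mean = cv2.blur(img, (ksize, ksize))   # local illumination estimate
    ratio = img / local_mean                     # reflectance-like signal
    out = 128.0 * ratio ** gain                  # re-expand local contrast
    return np.clip(out, 0, 255).astype(np.uint8)

gray = cv2.imread("lunar_scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
cv2.imwrite("coded.png", local_adaptive_code(gray))
```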
Integration of USB and firewire cameras in machine vision applications
NASA Astrophysics Data System (ADS)
Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard
1999-08-01
Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital standards and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.
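As a modern stand-in for the integration task (the paper predates OpenCV), the sketch below grabs frames from a consumer USB camera through OpenCV's capture abstraction and checks the negotiated format rather than assuming the advertised one.

```python
import cv2

cap = cv2.VideoCapture(0)                        # first enumerated USB camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)           # request a format...
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

# ...then verify what the driver actually negotiated
print("negotiated size:", cap.get(cv2.CAP_PROP_FRAME_WIDTH),
      "x", cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print("negotiated fps:", cap.get(cv2.CAP_PROP_FPS))

ok, frame = cap.read()
if ok:
    cv2.imwrite("first_frame.png", frame)
cap.release()
```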
NASA Astrophysics Data System (ADS)
Beaumont, Samuel; Otero, Toribio F.
2018-07-01
Polypyrrole film electrodes are constituted by multielectronic electrochemical molecular machines (every polymeric molecule), counterions, and water, mimicking the intracellular matrix of muscular cells. The influence of the electrolyte concentration on the reversible oxidation/reduction of polypyrrole films was studied in NaCl aqueous solutions by consecutive square potential waves. The consumed redox charge and the consumed electrical energy change as a function of the concentration. This means that the extension (the consumed charge) of the reaction, which involves conformational, or allosteric, movements of the reacting polymeric chains (molecular machines), responds to (senses) the chemical energy of the reaction environment. A theoretical description of the empirical results is presented, yielding the sensing equations and the concomitant sensitivities. These results could indicate the origin and nature of the neural signals sent to the brain from biological haptic muscles, which work by cooperative actuation of the actin-myosin molecular machines driven by chemical reactions while sensing, simultaneously, the fatigue state of the muscle.
Chung, Tien-Kan; Yeh, Po-Chen; Lee, Hao; Lin, Cheng-Mao; Tseng, Chia-Yung; Lo, Wen-Tuan; Wang, Chieh-Min; Wang, Wen-Chin; Tu, Chi-Jen; Tasi, Pei-Yuan; Chang, Jui-Wen
2016-02-23
An attachable electromagnetic-energy-harvester driven wireless vibration-sensing system for monitoring milling processes and cutter-wear/breakage conditions is demonstrated. The system includes an electromagnetic energy harvester, three single-axis Micro Electro-Mechanical Systems (MEMS) accelerometers, a wireless chip module, and corresponding circuits. The harvester, consisting of magnets with a coil, uses electromagnetic induction to harness mechanical energy produced by the rotating spindle in milling processes and consequently converts the harnessed energy to electrical output. The electrical output is rectified by the rectification circuit to power the accelerometers and wireless chip module. The harvester, circuits, accelerometers, and wireless chip are integrated as an energy-harvester driven wireless vibration-sensing system; this completes a self-powered wireless vibration-sensing system. For system testing, a numerically controlled machining tool with various milling processes is used. According to the test results, the system is fully self-powered and able to successfully sense vibration in the milling processes. Furthermore, by analyzing the vibration signals (i.e., the electrical outputs of the accelerometers), criteria are successfully established for real-time, accurate monitoring of the milling processes and cutter conditions (such as cutter-wear conditions and cutter-breakage occurrence). Based on these results, our approach can be applied to most milling and other machining machines in factories to realize smarter machining technologies.
Intelligent image processing for machine safety
NASA Astrophysics Data System (ADS)
Harvey, Dennis N.
1994-10-01
This paper describes the use of intelligent image processing as a machine guarding technology. One or more color, linear-array cameras are positioned to view the critical region(s) around a machine tool or other piece of manufacturing equipment. The image data is processed to provide indicators of conditions dangerous to the equipment via color content, shape content, and motion content. The data from these analyses is then sent to a threat evaluator. The purpose of the evaluator is to determine whether a potentially machine-damaging condition exists based on the analyses of color, shape, and motion, and on 'knowledge' of the specific environment of the machine. The threat evaluator employs fuzzy logic as a means of dealing with uncertainty in the vision data.
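The abstract does not spell out the rule base, but the fuzzy combination step can be illustrated with a toy Mamdani-style evaluator. The rule structure, names, and defuzzification below are hypothetical; the sketch shows only how min/max fuzzy logic absorbs uncertainty in the three vision scores.

```python
def threat_level(color_score, shape_score, motion_score):
    """Toy fuzzy threat evaluator. Each input is a membership degree in
    [0, 1] for 'looks dangerous' from the corresponding vision analysis.
    Hypothetical rules, not the paper's rule base:
      R1: IF motion is high AND (color OR shape) is high -> high threat
      R2: IF color, shape, and motion are all low        -> low threat"""
    rule_high = min(motion_score, max(color_score, shape_score))
    rule_low = min(1 - color_score, 1 - shape_score, 1 - motion_score)
    # Defuzzify: relative strength of the 'high threat' rule.
    return rule_high / (rule_high + rule_low + 1e-9)

# e.g. fast motion with ambiguous color and shape evidence:
print(threat_level(0.4, 0.2, 0.9))
```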
Power electromagnetic strike machine for engineering-geological surveys
NASA Astrophysics Data System (ADS)
Usanov, K. M.; Volgin, A. V.; Chetverikov, E. A.; Kargin, V. A.; Moiseev, A. P.; Ivanova, Z. I.
2017-10-01
In dynamic sensing of soils and pulsed non-explosive seismic exploration, the most common and effective method is the strike method, which is implemented by pneumatic, hydraulic, and electrical strike machines of various structures and parameters. The creation of compact, portable strike machines that do not require transportation by mechanized means is therefore important. A promising direction in the development of strike machines is the use of a pulsed electromagnetic actuator, which is characterized by relatively low energy consumption, relatively high specific performance and efficiency, and direct conversion of electrical energy into the mechanical work of a strike mass moving along a linear trajectory. The results of these studies made it possible to build, on the basis of linear electromagnetic motors, portable electromagnetic pulse machines for dynamic sensing of soils and land seismic pulse exploration at small depths.
Application of machine vision to pup loaf bread evaluation
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.; Chung, O. K.
1996-12-01
Intrinsic end-use quality of hard winter wheat breeding lines is routinely evaluated at the USDA, ARS, USGMRL, Hard Winter Wheat Quality Laboratory. The experimental baking test of pup loaves is the ultimate test for evaluating hard wheat quality. Computer vision was applied to develop an objective methodology for bread quality evaluation for the 1994 and 1995 crop wheat breeding line samples. Computer-extracted features for bread crumb grain were studied, using subimages (32 by 32 pixels) and features computed for the slices with different threshold settings. A subsampling grid was located with respect to the axis of symmetry of a slice to provide identical topological subimage information. Different ranking techniques were applied to the databases. Statistical analysis was run on the database with digital image and breadmaking features. Several ranking algorithms and data visualization techniques were employed to create a sensitive scale for porosity patterns of bread crumb. There were significant linear correlations between machine vision extracted features and breadmaking parameters. Crumb grain scores by human experts correlated more highly with some image features than with breadmaking parameters.
Using turbulence scintillation to assist object ranging from a single camera viewpoint.
Wu, Chensheng; Ko, Jonathan; Coffaro, Joseph; Paulson, Daniel A; Rzasa, John R; Andrews, Larry C; Phillips, Ronald L; Crabbs, Robert; Davis, Christopher C
2018-03-20
Image distortions caused by atmospheric turbulence are often treated as unwanted noise or errors in many image processing studies. Our study, however, shows that in certain scenarios the turbulence distortion can be very helpful in enhancing image processing results. This paper describes a novel approach that uses the scintillation traits recorded on a video clip to perform object ranging with reasonable accuracy from a single camera viewpoint. Conventionally, a single camera would be confused by the perspective viewing problem, where a large object far away looks the same as a small object close by. When the atmospheric turbulence phenomenon is considered, the edge or texture pixels of an object tend to scintillate and vary more with increased distance. This turbulence induced signature can be quantitatively analyzed to achieve object ranging with reasonable accuracy. Despite the inevitable fact that turbulence will cause random blurring and deformation of imaging results, it also offers convenient solutions to some remote sensing and machine vision problems, which would otherwise be difficult.
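The ranging cue described here is temporal: the farther the object, the more its edge and texture pixels flicker from frame to frame under turbulence. A minimal way to quantify that signature is the temporal standard deviation over edge pixels of a fixed-camera clip; the statistic below is an illustrative stand-in for the paper's scintillation analysis, and the Canny thresholds are arbitrary.

```python
import numpy as np
import cv2

def edge_scintillation_index(frames):
    """Mean temporal std-dev over edge pixels of a fixed-camera video.
    `frames` is a list of grayscale uint8 frames. Under the paper's
    premise this index grows with object distance (illustrative only)."""
    stack = np.stack([f.astype(np.float32) for f in frames])  # (T, H, W)
    temporal_std = stack.std(axis=0)           # per-pixel flicker strength
    edges = cv2.Canny(frames[0], 50, 150) > 0  # edge mask from first frame
    return float(temporal_std[edges].mean())
```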
Object-based detection of vehicles using combined optical and elevation data
NASA Astrophysics Data System (ADS)
Schilling, Hendrik; Bulatov, Dimitri; Middelmann, Wolfgang
2018-02-01
The detection of vehicles is an important and challenging topic that is relevant for many applications. In this work, we present a workflow that utilizes optical and elevation data to detect vehicles in remotely sensed urban data. This workflow consists of three consecutive stages: candidate identification, classification, and single-vehicle extraction. Unlike in most previous approaches, fusion of both data sources is strongly pursued at all stages. While the first stage utilizes the fact that most man-made objects are rectangular in shape, the second and third stages employ machine learning techniques combined with specific features. The stages are designed to handle multiple sensor inputs, which results in a significant improvement. A detailed evaluation shows the benefits of our workflow, which includes hand-tailored features; even in comparison with classification approaches based on Convolutional Neural Networks, which are state of the art in computer vision, we obtained comparable or superior performance (F1 scores of 0.94-0.96).
Three-Dimensional Images For Robot Vision
NASA Astrophysics Data System (ADS)
McFarland, William D.
1983-12-01
Robots are attracting increased attention in the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, the need for robot vision has become critical. The 'blind' robot, while occupying an economical niche at present, is severely limited and job specific, being only one step up from numerically controlled machines. To satisfy robot vision requirements successfully, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with more emphasis on laser-radar-type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed, with some discussion of the major techniques in the literature for three-dimensional scene analysis.
Prototyping machine vision software on the World Wide Web
NASA Astrophysics Data System (ADS)
Karantalis, George; Batchelor, Bruce G.
1998-10-01
Interactive image processing is a proven technique for analyzing industrial vision applications and building prototype systems. Several of the previous implementations have used dedicated hardware to perform the image processing, with a top layer of software providing a convenient user interface. More recently, self-contained software packages have been devised that run on a standard computer. The advent of the Java programming language has made it possible to write platform-independent software, operating over the Internet or a company-wide Intranet. Thus, there arises the possibility of designing at least some shop-floor inspection/control systems without the vision engineer ever entering the factories where they will be used. If successful, this project will have a major impact on the productivity of vision systems designers.
A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors
Vanarse, Anup; Osseiran, Adam; Rassau, Alexander
2016-01-01
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and as a result tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways that are similar to neural signals. This allows for much lower power consumption due to an ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in the understanding of biological sensing, together with advanced electronics, have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state of the art in neuromorphic implementation of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing together these key contributions, we suggest a future research direction for further development of the neuromorphic sensing field. PMID:27065784
Keenan, S J; Diamond, J; McCluggage, W G; Bharucha, H; Thompson, D; Bartels, P H; Hamilton, P W
2000-11-01
The histological grading of cervical intraepithelial neoplasia (CIN) remains subjective, resulting in inter- and intra-observer variation and poor reproducibility in the grading of cervical lesions. This study has attempted to develop an objective grading system using automated machine vision. The architectural features of cervical squamous epithelium are quantitatively analysed using a combination of computerized digital image processing and Delaunay triangulation analysis; 230 images digitally captured from cases previously classified by a gynaecological pathologist included normal cervical squamous epithelium (n=30), koilocytosis (n=46), CIN 1 (n=52), CIN 2 (n=56), and CIN 3 (n=46). Intra- and inter-observer variation had kappa values of 0.502 and 0.415, respectively. A machine vision system was developed in KS400 macro programming language to segment and mark the centres of all nuclei within the epithelium. By object-oriented analysis of image components, the positional information of nuclei was used to construct a Delaunay triangulation mesh. Each mesh was analysed to compute triangle dimensions including the mean triangle area, the mean triangle edge length, and the number of triangles per unit area, giving an individual quantitative profile of measurements for each case. Discriminant analysis of the geometric data revealed the significant discriminatory variables from which a classification score was derived. The scoring system distinguished between normal and CIN 3 in 98.7% of cases and between koilocytosis and CIN 1 in 76.5% of cases, but only 62.3% of the CIN cases were classified into the correct group, with the CIN 2 group showing the highest rate of misclassification. Graphical plots of triangulation data demonstrated the continuum of morphological change from normal squamous epithelium to the highest grade of CIN, with overlapping of the groups originally defined by the pathologists. This study shows that automated location of nuclei in cervical biopsies using computerized image analysis is possible. Analysis of positional information enables quantitative evaluation of architectural features in CIN using Delaunay triangulation meshes, which is effective in the objective classification of CIN. This demonstrates the future potential of automated machine vision systems in diagnostic histopathology. Copyright 2000 John Wiley & Sons, Ltd.
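The geometric step in this study maps directly onto standard computational-geometry tools. The sketch below, assuming SciPy rather than the KS400 macro language used in the paper, builds the Delaunay mesh from nucleus centres and computes the three reported triangle statistics (mean area, mean edge length, triangles per unit area).

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def triangulation_features(nucleus_centres):
    """Triangle statistics for CIN grading from an (N, 2) array of
    nucleus-centre coordinates. A sketch of the paper's Delaunay step,
    not its KS400 implementation."""
    nucleus_centres = np.asarray(nucleus_centres, dtype=float)
    tri = Delaunay(nucleus_centres)
    pts = nucleus_centres[tri.simplices]          # (M, 3, 2) vertex coords
    a = pts[:, 1] - pts[:, 0]
    b = pts[:, 2] - pts[:, 0]
    areas = 0.5 * np.abs(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])
    edges = np.concatenate([
        np.linalg.norm(pts[:, 1] - pts[:, 0], axis=1),
        np.linalg.norm(pts[:, 2] - pts[:, 1], axis=1),
        np.linalg.norm(pts[:, 0] - pts[:, 2], axis=1)])
    hull = ConvexHull(nucleus_centres)            # hull.volume is 2-D area
    return {"mean_triangle_area": float(areas.mean()),
            "mean_edge_length": float(edges.mean()),
            "triangles_per_unit_area": len(tri.simplices) / hull.volume}
```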
NASA Astrophysics Data System (ADS)
Belwanshi, Vinod; Topkar, Anita
2016-05-01
A finite element analysis study has been carried out to optimize the design parameters of bulk-micromachined silicon membranes for piezoresistive pressure-sensing applications. The design is targeted at measurement of pressures up to 200 bar for nuclear reactor applications. The mechanical behavior of bulk-micromachined silicon membranes in terms of deflection and stress generation has been simulated. Based on the simulation results, the membrane design parameters have been optimized in terms of length, width, and thickness. Subsequent to optimization of the membrane geometrical parameters, the dimensions and location of the high-stress-concentration region for implantation of piezoresistors have been obtained for pressure sensing using the piezoresistive technique.
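For a first-order check on such FEA results, classical small-deflection plate theory gives closed-form estimates for a clamped square membrane under uniform pressure. The coefficients below (0.00126 for centre deflection, 0.308 for peak stress at the edge midpoint) are the standard clamped-square-plate values; the silicon constants are typical, orientation-dependent assumptions rather than values from the paper.

```python
def clamped_square_membrane(p, L, t, E=170e9, nu=0.28):
    """Analytic estimates for a clamped square membrane: uniform
    pressure p (Pa), side length L (m), thickness t (m). E and nu are
    assumed silicon values; FEA (as in the paper) refines these numbers."""
    D = E * t**3 / (12 * (1 - nu**2))    # flexural rigidity
    w_max = 0.00126 * p * L**4 / D       # deflection at membrane centre
    sigma_max = 0.308 * p * (L / t)**2   # peak stress, midpoint of edge
    return w_max, sigma_max

# e.g. 200 bar on a 1 mm x 1 mm x 50 um membrane:
w, s = clamped_square_membrane(200e5, 1e-3, 50e-6)
```

The high-stress region this formula locates, the midpoints of the membrane edges, is where piezoresistors are conventionally implanted, consistent with the optimization goal described above.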
Remote Sensing as a Demonstration of Applied Physics.
ERIC Educational Resources Information Center
Colwell, Robert N.
1980-01-01
Provides information about the field of remote sensing, including discussions of geo-synchronous and sun-synchronous remote-sensing platforms, the actual physical processes and equipment involved in sensing, the analysis of images by humans and machines, and inexpensive, small scale methods, including aerial photography. (CS)
NASA Astrophysics Data System (ADS)
As'ari, M. A.; Sheikh, U. U.
2012-04-01
The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia in performing activities of daily living (ADLs) promises a reduction in care costs, especially in training and hiring human caregivers. The main problem, however, is the variety of sensing agents used in such systems, which depend on the intent (the type of ADL) and the environment where the activity is performed. In this paper we give an overview of the potential of computer vision-based sensing agents in assistive systems and of how they can be generalized and made invariant to various kinds of ADLs and environments. We find that a gap exists between existing vision-based human action recognition methods and the design of such systems, owing to the cognitive and physical impairments of people with dementia.
1988-04-30
Keywords: haptic hand, touch, vision, robot, object recognition, categorization. The research established that the haptic system has remarkable capabilities for object recognition; haptics is defined as purposive touch. Ratings of the importance of dimensions for categorizing common objects by touch were gathered; texture and hardness ratings strongly co-vary.
UAV Research at NASA Langley: Towards Safe, Reliable, and Autonomous Operations
NASA Technical Reports Server (NTRS)
Davila, Carlos G.
2016-01-01
Unmanned Aerial Vehicles (UAVs) are fundamental components in several aspects of research at NASA Langley, such as flight dynamics, mission-driven airframe design, airspace integration demonstrations, atmospheric science projects, and more. In particular, NASA Langley Research Center (Langley) is using UAVs to develop and demonstrate innovative capabilities that meet the autonomy and robotics challenges that are anticipated in science, space exploration, and aeronautics. These capabilities will enable new NASA missions such as asteroid rendezvous and retrieval (ARRM), Mars exploration, in-situ resource utilization (ISRU), pollution measurements in historically inaccessible areas, and the integration of UAVs into our everyday lives, all missions of increasing complexity, distance, pace, and/or accessibility. Building on decades of NASA experience and success in the design, fabrication, and integration of robust and reliable automated systems for space and aeronautics, Langley's Autonomy Incubator seeks to bridge the gap between automation and autonomy by enabling safe autonomous operations via onboard sensing and perception systems in both data-rich and data-deprived environments. The Autonomy Incubator is focused on the challenge of mobility and manipulation in dynamic and unstructured environments by integrating technologies such as computer vision, visual odometry, real-time mapping, path planning, object detection and avoidance, object classification, adaptive control, sensor fusion, machine learning, and natural human-machine teaming. These technologies are implemented in an architectural framework developed in-house for easy integration and interoperability of cutting-edge hardware and software.
Mihailidis, Alex; Carmichael, Brent; Boger, Jennifer
2004-09-01
This paper discusses the use of computer vision in pervasive healthcare systems, specifically in the design of a sensing agent for an intelligent environment that assists older adults with dementia during an activity of daily living. An overview of the techniques applied in this particular example is provided, along with results from preliminary trials completed using the new sensing agent. A discussion of the results obtained to date is presented, including technical and social issues that remain for the advancement and acceptance of this type of technology within pervasive healthcare.
Cutting tool form compensation system and method
Barkman, W.E.; Babelay, E.F. Jr.; Klages, E.J.
1993-10-19
A compensation system for a computer-controlled machining apparatus, having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation, utilizes a camera and a vision computer. At a preselected stage of a machining operation, these gather information on the actual shape and size of the cutting edge of the cutting tool and alter the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer uses these contrasting intensities to locate points in the image which correspond to points along the actual cutting edge. Following a series of computations, involving the determination of a tool center from the points identified along the tool edge, the results are fed to the controller, where the preprogrammed path is altered as described above. 9 figures.
Galaxy morphology - An unsupervised machine learning approach
NASA Astrophysics Data System (ADS)
Schutter, A.; Shamir, L.
2015-09-01
Structural properties carry valuable information about the formation and evolution of galaxies, and are important for understanding the past, present, and future universe. Here we use unsupervised machine learning methodology to analyze a network of similarities between galaxy morphological types and automatically deduce a morphological sequence of galaxies. Application of the method to the EFIGI catalog shows that the morphological scheme produced by the algorithm is largely in agreement with the De Vaucouleurs system, demonstrating the ability of computer vision and machine learning methods to automatically profile galaxy morphological sequences. The unsupervised analysis method is based on comprehensive computer vision techniques that compute the visual similarities between the different morphological types. Rather than relying on human cognition, the proposed system deduces the similarities between sets of galaxy images automatically, and is therefore not limited by the number of galaxies being analyzed. The source code of the method is publicly available, and the protocol of the experiment is included in the paper so that the experiment can be replicated and the method can be used to analyze user-defined datasets of galaxy images.
Machine vision inspection of railroad track
DOT National Transportation Integrated Search
2011-01-10
North American Railways and the United States Department of Transportation : (US DOT) Federal Railroad Administration (FRA) require periodic inspection of railway : infrastructure to ensure the safety of railway operation. This inspection is a critic...
Technology for robotic surface inspection in space
NASA Technical Reports Server (NTRS)
Volpe, Richard; Balaram, J.
1994-01-01
This paper presents on-going research in robotic inspection of space platforms. Three main areas of investigation are discussed: machine vision inspection techniques, an integrated sensor end-effector, and an orbital environment laboratory simulation. Machine vision inspection utilizes automatic comparison of new and reference images to detect on-orbit induced damage such as micrometeorite impacts. The cameras and lighting used for this inspection are housed in a multisensor end-effector, which also contains a suite of sensors for detection of temperature, gas leaks, proximity, and forces. To fully test all of these sensors, a realistic space platform mock-up has been created, complete with visual, temperature, and gas anomalies. Further, changing orbital lighting conditions are effectively mimicked by a robotic solar simulator. In the paper, each of these technology components will be discussed, and experimental results are provided.
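The inspection technique named here, automatic comparison of new images against reference imagery, can be sketched with generic differencing tools. The fragment below is an illustrative outline, assuming pre-registered, equally illuminated image pairs; the JPL system's actual registration and lighting-compensation steps are not reproduced.

```python
import cv2

def detect_surface_change(reference, current, thresh=25, min_area=20):
    """Flag candidate on-orbit damage (e.g. micrometeorite impacts) by
    differencing a new grayscale image against a reference image.
    Generic sketch; threshold values are illustrative."""
    diff = cv2.absdiff(reference, current)             # per-pixel change
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to be plausible damage sites.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```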
INFIBRA: machine vision inspection of acrylic fiber production
NASA Astrophysics Data System (ADS)
Davies, Roger; Correia, Bento A. B.; Contreiras, Jose; Carvalho, Fernando D.
1998-10-01
This paper describes the implementation of INFIBRA, a machine vision system for the inspection of acrylic fiber production lines. The system was developed by INETI under a contract from Fisipe, Fibras Sinteticas de Portugal, S.A. At Fisipe there are ten production lines in continuous operation, each approximately 40 m in length. A team of operators used to perform periodic manual visual inspection of each line in conditions of high ambient temperature and humidity. It is not surprising that failures in the manual inspection process occurred with some frequency, with consequences that ranged from reduced fiber quality to production stoppages. The INFIBRA system architecture is a specialization of a generic, modular machine vision architecture based on a network of Personal Computers (PCs), each equipped with a low cost frame grabber. Each production line has a dedicated PC that performs automatic inspection, using specially designed metrology algorithms, via four video cameras located at key positions on the line. The cameras are mounted inside custom-built, hermetically sealed water-cooled housings to protect them from the unfriendly environment. The ten PCs, one for each production line, communicate with a central PC via a standard Ethernet connection. The operator controls all aspects of the inspection process, from configuration through to handling alarms, via a simple graphical interface on the central PC. At any time the operator can also view on the central PC's screen the live image from any one of the 40 cameras employed by the system.
Feature extraction algorithm for space targets based on fractal theory
NASA Astrophysics Data System (ADS)
Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin
2007-11-01
In order to offer the potential for extending the life of satellites and reducing launch and operating costs, satellite servicing, including on-orbit repairs, upgrades, and refueling, will be conducted much more frequently in the future. Such space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking for a space surveillance system. Machine vision has been applied to research on the relative pose of spacecraft, and feature extraction is the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used in a machine vision system to determine and track the relative pose of an observed satellite during proximity operations. The method obtains a gray-level image distributed by fractal dimension, using the Differential Box-Counting (DBC) approach of fractal theory to restrain noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also keeps the inner details. Meanwhile, edge extraction is processed only in the moving area, greatly reducing computation. Simulation results compare edge detection using the presented method with other detection methods; the results indicate that the presented algorithm is a valid way to address the relative-pose problem for spacecraft.
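The DBC step the abstract relies on can be written compactly. The sketch below estimates a global DBC fractal dimension from a square grayscale image; the paper applies the same counting idea to produce a per-pixel fractal-dimension map before morphological edge detection, and the box sizes here are arbitrary.

```python
import numpy as np

def dbc_fractal_dimension(gray, sizes=(2, 4, 8, 16, 32)):
    """Differential Box-Counting (DBC) fractal dimension of a grayscale
    image, treated as a surface over the image plane. Illustrative
    global estimate; the paper computes a local, per-region version."""
    M = min(gray.shape)
    img = gray[:M, :M].astype(np.float64)
    G = 256.0                              # number of gray levels
    counts = []
    for s in sizes:
        h = s * G / M                      # box height in gray levels
        n = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                block = img[i:i + s, j:j + s]
                # boxes needed to cover the block's intensity range
                n += int(np.ceil((block.max() + 1) / h)
                         - np.ceil((block.min() + 1) / h)) + 1
        counts.append(n)
    # Dimension = slope of log N(s) versus log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```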
NASA Astrophysics Data System (ADS)
Johnson, Bradley; May, Gayle L.; Korn, Paula
The present conference discusses the currently envisioned goals of human-machine systems in spacecraft environments, prospects for human exploration of the solar system, and plausible methods for meeting human needs in space. Also discussed are the problems of human-machine interaction in long-duration space flights, remote medical systems for space exploration, the use of virtual reality for planetary exploration, the alliance between U.S. Antarctic and space programs, and the economic and educational impacts of the U.S. space program.
Fast and robust generation of feature maps for region-based visual attention.
Aziz, Muhammad Zaheer; Mertsching, Bärbel
2008-05-01
Visual attention is one of the important phenomena in biological vision which can be followed to achieve more efficiency, intelligence, and robustness in artificial vision systems. This paper investigates a region-based approach that performs pixel clustering prior to the processes of attention in contrast to late clustering as done by contemporary methods. The foundation steps of feature map construction for the region-based attention model are proposed here. The color contrast map is generated based upon the extended findings from the color theory, the symmetry map is constructed using a novel scanning-based method, and a new algorithm is proposed to compute a size contrast map as a formal feature channel. Eccentricity and orientation are computed using the moments of obtained regions and then saliency is evaluated using the rarity criteria. The efficient design of the proposed algorithms allows incorporating five feature channels while maintaining a processing rate of multiple frames per second. Another salient advantage over the existing techniques is the reusability of the salient regions in the high-level machine vision procedures due to preservation of their shapes and precise locations. The results indicate that the proposed model has the potential to efficiently integrate the phenomenon of attention into the main stream of machine vision and systems with restricted computing resources such as mobile robots can benefit from its advantages.
Development of a machine vision system for automated structural assembly
NASA Technical Reports Server (NTRS)
Sydow, P. Daniel; Cooper, Eric G.
1992-01-01
Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose-estimation accuracy needed to define the target position.
NASA Astrophysics Data System (ADS)
Gonulalan, Cansu
In recent years, there has been increasing demand for applications that monitor land-use targets using remote sensing images. Advances in remote sensing satellites give rise to research in this area. Many applications, ranging from urban growth planning to homeland security, have already used algorithms for automated object recognition from remote sensing imagery. However, they still have problems such as low detection accuracy and algorithms specific to a particular area. In this thesis, we focus on an automatic approach to classify and detect building footprints, road networks, and vegetation areas. The automatic interpretation of visual data is a comprehensive task in the computer vision field, and machine learning approaches improve the capability of classification in an intelligent way. We propose a method with high detection and classification accuracy. Multi-class classification is developed for detecting multiple objects. We present an AdaBoost-based approach along with a supervised learning algorithm; the combination of AdaBoost with an "attentional cascade" is adopted from Viola and Jones [1]. This combination decreases computation time and makes real-time applications possible. For the feature extraction step, our contribution is to combine Haar-like features that include corner, rectangle, and Gabor features. Among all features, AdaBoost selects only critical features and generates an extremely efficient cascade-structured classifier. Finally, we present and evaluate our experimental results. The overall system is tested and high detection performance is achieved. The precision rate of the final multi-class classifier is over 98%.
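One stage of the boosting scheme described above can be sketched with off-the-shelf tools. The fragment below trains an AdaBoost ensemble of decision stumps and then relaxes its decision threshold until almost all true positives pass, the defining trick of the Viola-Jones attentional cascade; it assumes Haar/Gabor feature vectors X have already been extracted, and all parameter values are illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_cascade_stage(X, y, n_weak=50, keep_rate=0.99):
    """Train one attentional-cascade stage: AdaBoost over decision
    stumps, with the accept threshold chosen so that `keep_rate` of
    positive training samples pass to the next stage."""
    stage = AdaBoostClassifier(
        # decision stumps (param named base_estimator in older sklearn)
        estimator=DecisionTreeClassifier(max_depth=1),
        n_estimators=n_weak)
    stage.fit(X, y)
    pos_scores = stage.decision_function(X[y == 1])
    threshold = np.quantile(pos_scores, 1.0 - keep_rate)
    return stage, threshold

# At detection time a window is rejected as soon as any stage scores it
# below its threshold, so most background windows exit after a few stumps.
```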
A.M. Skeffington: the father of behavioral optometry--his contributions
NASA Astrophysics Data System (ADS)
Maples, Willis C.
1998-10-01
The life of Dr. A. M. Skeffington, his model of vision, and his contributions to optometry are reviewed. In particular, vision as a spatial information processing system and the dual sensing ocular system are discussed to answer the questions: 'Where is the object in space?' and 'What is the object in space?'
NASA Astrophysics Data System (ADS)
Johnson, Bradley; May, Gayle L.; Korn, Paula
A recent symposium produced papers in the areas of solar system exploration, man machine interfaces, cybernetics, virtual reality, telerobotics, life support systems and the scientific and technology spinoff from the NASA space program. A number of papers also addressed the social and economic impacts of the space program. For individual titles, see A95-87468 through A95-87479.
Computer vision in roadway transportation systems: a survey
NASA Astrophysics Data System (ADS)
Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja
2013-10-01
There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.
Object as a model of intelligent robot in the virtual workspace
NASA Astrophysics Data System (ADS)
Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.
2015-11-01
The contemporary industry requires that every element of a production line fit into a global schema, which is connected with the global structure of the business. There is a need to find practical and effective ways of designing and managing the production process. The term "effective" should be understood to mean that there exists a method for building a system of nodes and relations that describes the role of a particular machine in the production process. Among all the machines involved in the manufacturing process, industrial robots are the most complex ones. This complexity is reflected in the realization of elaborate tasks involving handling, transporting, or orienting objects in a workspace, and even performing simple machining processes such as deburring, grinding, painting, applying adhesives and sealants, etc. The robot also performs activities connected with automatic tool changing and with operating the equipment mounted on its wrist. Because it has a programmable control system, the robot additionally performs activities connected with sensors, vision systems, operating storages of manipulated objects, tools or grippers, measuring stands, etc. For this reason, the description of the robot as a part of a production system should take into account the specific nature of this machine: the robot is a substitute for a worker who performs his tasks in a particular environment. In this case, the model should be able to characterize the essence of "employment" in a sufficient way. One possible approach to this problem is to treat the robot as an object, in the sense often used in computer science. This allows both describing operations performed on the object and describing operations performed by the object. This paper focuses mainly on the definition of the object as the model of the robot. This model is confronted with other possible descriptions. The results can be used further during the design of a complete manufacturing system that takes into account all the machines involved and has the form of an object-oriented model.
Real-time registration of video with ultrasound using stereo disparity
NASA Astrophysics Data System (ADS)
Wang, Jihang; Horvath, Samantha; Stetten, George; Siegel, Mel; Galeotti, John
2012-02-01
Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator, the Sonic Flashlight, which uses a half silvered mirror and miniature display mounted on an ultrasound probe to produce a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does today.
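The surface-location step rests on standard two-view geometry: after calibration and rectification, per-pixel disparity between the left and right cameras gives depth by triangulation. A minimal sketch using OpenCV block matching follows; calibration, rectification, and the fusion with the ultrasound slice are assumed to have been handled elsewhere, and the matcher parameters are illustrative.

```python
import cv2

def surface_disparity(left_gray, right_gray):
    """Disparity map of the phantom surface from the two probe-mounted
    cameras (rectified grayscale inputs). Depth then follows from
    Z = f * B / d, with focal length f and camera baseline B."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray)  # fixed point, x16
    return disparity.astype('float32') / 16.0           # disparity in px
```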
Intermediate grouping on remotely sensed data using Gestalt algebra
NASA Astrophysics Data System (ADS)
Michaelsen, Eckart
2014-10-01
Human observers often achieve striking recognition performance on remotely sensed data, unmatched by machine vision algorithms. This holds even for thermal (IR) images and synthetic aperture radar (SAR). Psychologists refer to these capabilities as Gestalt perceptive skills. Gestalt Algebra is a mathematical structure recently proposed for such laws of perceptual grouping. It provides operations for mirror symmetry, continuation in rows, and rotationally symmetric patterns. Each of these operations forms an aggregate-Gestalt from a tuple of part-Gestalten. Each Gestalt is attributed with a position, an orientation, a rotational frequency, a scale, and an assessment, respectively. Any Gestalt can be combined with any other Gestalt using any of the three operations. Most often the assessment of the new aggregate-Gestalt will be close to zero; only if the part-Gestalten fit perfectly into the desired pattern will the new aggregate-Gestalt be assessed with value one. The algebra works in both directions: it may render an organized symmetric mandala from random numbers, or it may recognize deeply hidden visual relationships between meaningful parts of a picture. For the latter, primitives must be obtained from the image by a key-point detector and a threshold. Intelligent search strategies are required for the search in the combinatorial space of possible Gestalt Algebra terms. As examples, maximally assessed Gestalten found in selected aerial, IR, and SAR images are presented.
NASA Astrophysics Data System (ADS)
Collins Petersen, Carolyn; Brandt, John C.
2003-11-01
Introduction; 1. Eyes in the sky; 2. Telescopes: multi-frequency time machines; 3. Planets on a pixel; 4. The lives of stars; 5. Galaxies - tales of stellar cities; 6. The once and future universe; 7. Stargazing - the next generation; Glossary.
A real-time surface inspection system for precision steel balls based on machine vision
NASA Astrophysics Data System (ADS)
Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen
2016-07-01
Precision steel balls are among the most fundamental components for motion- and power-transmission parts, and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need for a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism, and inspection algorithms for real-time signal processing and defect detection. The system is tested at a feeding speed of 4 pcs/s with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm², which meets the requirement for inspecting ISO grade 100 precision steel balls.
Valdés-Mas, M A; Martín-Guerrero, J D; Rupérez, M J; Pastor, F; Dualde, C; Monserrat, C; Peris-Martínez, C
2014-08-01
Keratoconus (KC) is the most common type of corneal ectasia. A corneal transplantation was the treatment of choice until the last decade. However, intra-corneal ring implantation has become more and more common, and it is commonly used to treat KC thus avoiding a corneal transplantation. This work proposes a new approach based on Machine Learning to predict the vision gain of KC patients after ring implantation. That vision gain is assessed by means of the corneal curvature and the astigmatism. Different models were proposed; the best results were achieved by an artificial neural network based on the Multilayer Perceptron. The error provided by the best model was 0.97D of corneal curvature and 0.93D of astigmatism. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
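The model family named here, a Multilayer Perceptron regressor predicting two dioptre-valued outputs, is easy to reproduce in outline. The sketch below is a stand-in, not the paper's trained network: the feature set, layer sizes, and training data are all hypothetical, and only the model family follows the paper.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def build_vision_gain_model():
    """MLP predicting post-implantation [corneal curvature (D),
    astigmatism (D)] from pre-operative measurements. Architecture and
    hyperparameters are illustrative, not those of the paper."""
    return make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                     random_state=0))

# model = build_vision_gain_model()
# model.fit(X_pre, y_post)   # X_pre: pre-op features; y_post: (n, 2)
```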
A survey of camera error sources in machine vision systems
NASA Astrophysics Data System (ADS)
Jatko, W. B.
In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
Fragrant pear sexuality recognition with machine vision
NASA Astrophysics Data System (ADS)
Ma, Benxue; Ying, Yibin
2006-10-01
In this research, a method to identify the sexuality of Kuler fragrant pears with machine vision was developed. Kuler fragrant pears comprise male and female fruit, which differ markedly in flavor. To detect the sexuality of a Kuler fragrant pear, images of the pear were acquired by a CCD color camera. Before feature extraction, preprocessing was conducted on the acquired images to remove noise and unnecessary content. Color, perimeter, and area features of the pear-bottom image were extracted by digital image processing techniques, and the pear's sexuality was determined by a complexity measure obtained from perimeter and area. Using 128 Kuler fragrant pears as samples, a good recognition rate (82.8%) between male and female pears was obtained. This result shows that the method can distinguish male from female pears with good accuracy.
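The abstract does not state the exact complexity formula, but a standard perimeter/area complexity measure is P²/A, which is scale-invariant and takes its minimum value (4π) for a circle. The sketch below, with that assumed definition, extracts the measure from a binary mask of the pear bottom.

```python
import cv2

def shape_complexity(binary_mask):
    """P^2/A complexity of the largest blob in a binary image.
    One common perimeter/area measure; the paper's exact formula is
    not given, so this definition is an assumption."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    perimeter = cv2.arcLength(c, closed=True)
    area = cv2.contourArea(c)
    return perimeter ** 2 / max(area, 1.0)   # 4*pi for a perfect circle
```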
Nondestructive Detection of the Internalquality of Apple Using X-Ray and Machine Vision
NASA Astrophysics Data System (ADS)
Yang, Fuzeng; Yang, Liangliang; Yang, Qing; Kang, Likui
The internal quality of apples cannot be assessed by eye during sorting, which can allow apples of reduced quality to reach the market. This paper describes an instrument using X-rays and machine vision. The following steps were used to process the X-ray images in order to identify mould-core apples. First, a lifting wavelet transform was used to obtain one low-frequency image and three high-frequency images. Second, the low-frequency image was enhanced through histogram equalization. Then, the edge of each apple image was detected using the Canny operator. Finally, a threshold was set to separate mould-core from normal apples according to the length of the apple core's diameter. The experimental results show that this method can detect mould-core apples on-line with little time consumed, less than 0.03 seconds per apple, and an accuracy of 92%.
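The four-step pipeline reads directly as code. The sketch below follows the abstract's order (wavelet low-pass, equalization, Canny, diameter threshold) using PyWavelets and OpenCV; the diameter threshold, Canny thresholds, and the crude edge-extent measurement are illustrative assumptions, since the abstract does not specify how the core diameter is measured.

```python
import cv2
import numpy as np
import pywt

def is_mould_core(xray_gray, diameter_thresh=40):
    """Classify one apple X-ray as mould-core (True) or normal (False)
    following the abstract's pipeline; parameter values are illustrative."""
    cA, _ = pywt.dwt2(xray_gray.astype(np.float32), 'haar')  # low-freq band
    low = cv2.normalize(cA, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    enhanced = cv2.equalizeHist(low)              # histogram equalization
    edges = cv2.Canny(enhanced, 50, 150)          # Canny edge map
    ys, xs = np.nonzero(edges)
    core_diameter = int(xs.max() - xs.min()) if xs.size else 0
    return core_diameter > diameter_thresh
```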
Computational Analysis of Behavior.
Egnor, S E Roian; Branson, Kristin
2016-07-08
In this review, we discuss the emerging field of computational behavioral analysis-the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important to both obtaining biologically relevant behavioral data and enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.
Machine vision application in animal trajectory tracking.
Koniar, Dušan; Hargaš, Libor; Loncová, Zuzana; Duchoň, František; Beňo, Peter
2016-04-01
This article was motivated by physicians' demand for technical support, based on machine vision tools, for research on pathologies of the gastrointestinal tract [10]. The proposed solution should be a less expensive alternative to existing radio frequency (RF) methods. The objective of the whole experiment was to evaluate the amount of animal motion as a function of the degree of pathology (gastric ulcer). In the theoretical part of the article, several methods of animal trajectory tracking are presented: two differential methods based on background subtraction, thresholding methods based on global and local thresholds, and color matching against a chosen template containing the searched spectrum of colors. The methods were tested offline on five video samples. Each sample showed a moving guinea pig locked in a cage under various lighting conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
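Of the compared families, background subtraction is the easiest to sketch. The fragment below, an illustrative outline rather than the authors' implementation, models the empty cage with OpenCV's MOG2 subtractor and takes the centroid of the largest foreground blob in each frame as the animal's position.

```python
import cv2

def track_animal(frames):
    """Return the (x, y) centroid trajectory of the largest moving blob
    in a sequence of BGR frames. Parameter values are illustrative."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                    varThreshold=16)
    trajectory = []
    for frame in frames:
        mask = subtractor.apply(frame)            # foreground mask
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            c = max(contours, key=cv2.contourArea)
            m = cv2.moments(c)
            if m["m00"] > 0:                      # avoid divide-by-zero
                trajectory.append((m["m10"] / m["m00"],
                                   m["m01"] / m["m00"]))
    return trajectory
```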
2004-02-20
KENNEDY SPACE CENTER, FLA. - Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla., is the site where Center Director Jim Kennedy and astronaut Kay Hire shared the agency’s new vision for space exploration with the next generation of explorers. Kennedy talked with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.
How the Air Force Should Stay Engaged in Computer Vision Technology Development
2007-04-01
Non-visual crypsis: a review of the empirical evidence for camouflage to senses other than vision.
Ruxton, Graeme D
2009-02-27
I review the evidence that organisms have adaptations that confer difficulty of detection by predators and parasites that seek their targets primarily using sensory systems other than vision. In other words, I will answer the question of whether crypsis is a concept that can usefully be applied to non-visual sensory perception. Probably because vision is such an important sensory system in humans, research in this field is sparse. Thus, at present we have very few examples of chemical camouflage, and even these contain some ambiguity in deciding whether they are best seen as examples of background matching or mimicry. There are many examples of organisms that are adaptively silent at times or in locations when or where predation risk is higher or in response to detection of a predator. By contrast, evidence that the form (rather than use) of vocalizations and other sound-based signals has been influenced by issues of reducing detectability to unintended receivers is suggestive rather than conclusive. There is again suggestive but not completely conclusive evidence for crypsis against electro-sensing predators. Lastly, mechanoreception is highly understudied in this regard, but there are scattered reports that strongly suggest that some species can be thought of as being adapted to be cryptic in this modality. Hence, I conclude that crypsis is a concept that can usefully be applied to senses other than vision, and that this is a field very much worthy of more investigation.
Sensing the Urgency: Envisioning a Black Humanist Vision of Care in Teacher Education
ERIC Educational Resources Information Center
Knight, Michelle G.
2004-01-01
This article builds on the growing body of research on preparing African-American pre-service teachers for culturally diverse urban environments. I specifically focus on a two year qualitative case study of Amy, an African-American pre-service teacher, to highlight five themes of a Black humanist vision of care. These themes emphasize the…
Cultivating "Una Persona Educada: A Sentipensante" (Sensing/Thinking) Vision of Education
ERIC Educational Resources Information Center
Rendon, Laura I.
2011-01-01
This essay focuses on the need to educate the new "persona educada", a dignified, honorable person with a good measure of social and personal responsibility who also possesses the habits of the mind and heart. To cultivate "una persona educada" requires a newly formed vision of education and pedagogy. Examples of three entrenched agreements that…
The Practical Aspects of the Visual Act in Studying.
ERIC Educational Resources Information Center
Robinson, Bernard N.
This paper explores the role of vision in reading and studying. "How Vision Occurs" discusses the functions of rods, cones, the cerebral cortex, and the receptor senses. "The Optometrist and the Educator" views the role of the optometrist in relation to student learning. "Eye Anatomy Terms" defines the ciliary muscle, optic disc and nerve, macula…
A Shared Vision of Human Excellence: Confucian Spirituality and Arts Education
ERIC Educational Resources Information Center
Tan, Charlene; Tan, Leonard
2016-01-01
Spirituality encourages the individual to make sense of oneself within a wider framework of meaning and see oneself as part of some larger whole. This article discusses Confucian spirituality by focusing on the spiritual ideals of "dao" (Way) and "he" (harmony). It is explained that the Way represents a shared vision of human…
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1990-01-01
Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.
NASA Astrophysics Data System (ADS)
Wang, Dongyi; Vinson, Robert; Holmes, Maxwell; Seibel, Gary; Tao, Yang
2018-04-01
The Atlantic blue crab is among the highest-valued seafood found along the American Eastern Seaboard. Currently, the crab processing industry is highly dependent on manual labor, but there is great potential for vision-guided intelligent machines to automate the meat-picking process. Studies show that the back-fin knuckles are robust features containing information about a crab's size, orientation, and the position of the crab's meat compartments. Our studies also make it clear that detecting the knuckles reliably in images is challenging due to the knuckle's small size, anomalous shape, and similarity to joints in the legs and claws. An accurate and reliable computer vision algorithm is proposed to detect the crab's back-fin knuckles in digital images. Convolutional neural networks (CNNs) can localize rough knuckle positions with 97.67% accuracy, transforming a global detection problem into a local one. Compared to rough localization based on human experience or other machine learning classification methods, the CNN shows the best localization results. Within the rough knuckle region, a k-means clustering method further extracts the exact knuckle positions based on the back-fin knuckle color features. The exact knuckle position helps generate a crab cut-line in the XY plane using a template matching method. This is a pioneering research project in crab image analysis and offers advanced machine intelligence for automated crab processing.
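The refinement stage, a coarse CNN window followed by color clustering, can be sketched as follows. The reference knuckle color, cluster count, and centroid heuristic are assumptions for illustration; only the CNN-then-k-means structure follows the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def refine_knuckle(window_rgb, reference_rgb, k=3):
    """Inside the rough window returned by the CNN, cluster pixel colors
    with k-means and return the centroid of the cluster whose mean color
    is nearest an assumed reference knuckle color."""
    h, w, _ = window_rgb.shape
    pixels = window_rgb.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=0).fit_predict(pixels)
    centres = np.array([pixels[labels == i].mean(axis=0) for i in range(k)])
    target = int(np.argmin(np.linalg.norm(centres - reference_rgb, axis=1)))
    ys, xs = np.nonzero(labels.reshape(h, w) == target)
    return float(xs.mean()), float(ys.mean())   # knuckle (x, y) in window
```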
Wire connector classification with machine vision and a novel hybrid SVM
NASA Astrophysics Data System (ADS)
Chauhan, Vedang; Joshi, Keyur D.; Surgenor, Brian W.
2018-04-01
A machine vision-based system has been developed and tested that uses a novel hybrid Support Vector Machine (SVM) in a part inspection application with clear plastic wire connectors. The application required the system to differentiate between 4 different known styles of connectors plus one unknown style, for a total of 5 classes. The requirement to handle an unknown class is what necessitated the hybrid approach. The system was trained with the 4 known classes and tested with 5 classes (the 4 known plus the 1 unknown). The hybrid classification approach used two layers of SVMs: one layer was semi-supervised and the other layer was supervised. The semi-supervised SVM was a special case of unsupervised machine learning that classified test images as one of the 4 known classes (to accept) or as the unknown class (to reject). The supervised SVM classified test images as one of the 4 known classes and consequently would give false positives (FPs). Two methods were tested. The difference between the methods was that the order of the layers was switched. The method with the semi-supervised layer first gave an accuracy of 80% with 20% FPs. The method with the supervised layer first gave an accuracy of 98% with 0% FPs. Further work is being conducted to see if the hybrid approach works with other applications that have an unknown class requirement.
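As a rough illustration of the accept/reject idea (not the authors' implementation), the sketch below approximates the semi-supervised layer with a scikit-learn one-class SVM trained on the known classes and applies it before the supervised multiclass SVM; integer class labels and precomputed feature vectors are assumed. Swapping the order of the two layers, which the paper found decisive, amounts to classifying first and rejecting afterwards.

```python
# Two-layer hybrid sketch: a one-class SVM accepts/rejects samples against
# the known classes, then a supervised multiclass SVM labels the accepted
# ones. Integer labels assumed; -1 marks the unknown (rejected) class.
from sklearn.svm import OneClassSVM, SVC

def train_hybrid(X_known, y_known):
    rejector = OneClassSVM(gamma="scale", nu=0.05).fit(X_known)
    classifier = SVC(kernel="rbf", gamma="scale").fit(X_known, y_known)
    return rejector, classifier

def predict_hybrid(rejector, classifier, X):
    accept = rejector.predict(X) == 1    # +1 inlier, -1 outlier
    labels = classifier.predict(X)
    labels[~accept] = -1                 # route rejects to the unknown class
    return labels
```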
Design and Development of a High Speed Sorting System Based on Machine Vision Guiding
NASA Astrophysics Data System (ADS)
Zhang, Wenchang; Mei, Jiangping; Ding, Yabin
In this paper, a vision-based control strategy for performing high-speed pick-and-place tasks on an automated product line is proposed, and the relevant control software is developed. A Delta robot drives a suction gripper that grasps disordered objects from one moving conveyor and places them on another in order. A CCD camera captures one image every time the conveyor moves a distance ds. Object position and shape are obtained after image processing. A target-tracking method based on "servo motor + synchronous conveyor" is used to perform the high-speed sorting operation in real time, as sketched below. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-control strategy.
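The tracking arithmetic behind the "servo motor + synchronous conveyor" scheme reduces to dead-reckoning on the belt encoder; the variable names below are illustrative, not from the paper.

```python
# An object detected at belt coordinate x0 when the conveyor encoder read
# s0 is expected at x0 + (s - s0) once the encoder reads s, because the
# camera is triggered every fixed displacement ds rather than on a clock.
def predicted_position(x0, s0, s_now):
    return x0 + (s_now - s0)   # same length units, along the belt axis
```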
No Evidence of Narrowly Defined Cognitive Penetrability in Unambiguous Vision
Lammers, Nikki A.; de Haan, Edward H.; Pinto, Yair
2017-01-01
The classical notion of cognitive impenetrability suggests that perceptual processing is an automatic modular system and not under conscious control. Near consensus is now emerging that this classical notion is untenable. However, as recently pointed out by Firestone and Scholl, this consensus is built on quicksand. In most studies claiming perception is cognitively penetrable, it remains unclear which actual process has been affected (perception, memory, imagery, input selection or judgment). In fact, the only available “proofs” for cognitive penetrability are proxies for perception, such as behavioral responses and neural correlates. We suggest that one can interpret cognitive penetrability in two different ways, a broad sense and a narrow sense. In the broad sense, attention and memory are not considered as “just” pre- and post-perceptual systems but as part of the mechanisms by which top-down processes influence the actual percept. Although many studies have proven top-down influences in this broader sense, it is still debatable whether cognitive penetrability remains tenable in a narrow sense. The narrow sense states that cognitive penetrability only occurs when top-down factors are flexible and cause a clear illusion from a first person perspective. So far, there is no strong evidence from a first person perspective that visual illusions can indeed be driven by high-level flexible factors. One cannot be cognitively trained to see and unsee visual illusions. We argue that this lack of convincing proof for cognitive penetrability in the narrow sense can be explained by the fact that most research focuses on foveal vision only. This type of perception may be too unambiguous for transient high-level factors to control perception. Therefore, illusions in more ambiguous perception, such as peripheral vision, can offer a unique insight into the matter. They produce a clear subjective percept based on unclear, degraded visual input: the optimal basis to study narrowly defined cognitive penetrability. PMID:28740471
Vision Based Autonomous Robotic Control for Advanced Inspection and Repair
NASA Technical Reports Server (NTRS)
Wehner, Walter S.
2014-01-01
The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.
NASA Astrophysics Data System (ADS)
Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.
2018-04-01
Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work of designing and building an autonomous vision-based GPS-denied unmanned vehicle and developing ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out, using outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.
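As a concrete illustration of the 3D-reconstruction module (not the ARFM code itself), a minimal stereo triangulation step in OpenCV might look as follows, assuming rectified images and projection matrices P1, P2 obtained from the calibration module.

```python
# Minimal sketch of stereo 3D reconstruction from matched image points,
# assuming calibrated 3x4 projection matrices P1 (left) and P2 (right).
import numpy as np
import cv2

def reconstruct(pts_left, pts_right, P1, P2):
    """pts_left, pts_right: Nx2 arrays of matched pixel coordinates."""
    X = cv2.triangulatePoints(P1, P2,
                              pts_left.T.astype(np.float64),
                              pts_right.T.astype(np.float64))
    return (X[:3] / X[3]).T   # Nx3 points in the left-camera frame
```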
Method and system for controlling start of a permanent magnet machine
Walters, James E.; Krefta, Ronald John
2003-10-28
Method and system for controlling a permanent magnet machine are provided. The method provides a sensor assembly for sensing rotor sector position relative to a plurality of angular sectors, and a sensor for sensing angular increments in rotor position. The method allows starting the machine in a brushless direct-current mode of operation using a calculated initial rotor position based on initial angular sector position information from the sensor assembly. Upon determining a transition from the initial angular sector to the next angular sector, the method allows switching to a sinusoidal mode of operation using rotor position based on rotor position information from the incremental sensor.
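The claimed start sequence can be paraphrased as a small state machine; the sketch below is an interpretation of the abstract, with six 60° sectors assumed, not text from the patent.

```python
# Interpretive sketch: start in brushless-DC mode from the sector midpoint,
# then lock the angle and switch to sinusoidal commutation at the first
# sector transition. Six 60-degree sectors are an assumption.
SECTOR_WIDTH = 60.0  # degrees per angular sector (assumed)

class StartController:
    def __init__(self, initial_sector):
        self.mode = "BLDC"
        # Sector midpoint is the best coarse initial rotor position.
        self.angle = initial_sector * SECTOR_WIDTH + SECTOR_WIDTH / 2

    def on_sector_transition(self, new_sector):
        self.angle = new_sector * SECTOR_WIDTH  # exact angular reference
        self.mode = "SINUSOIDAL"

    def on_increment(self, delta_deg):          # from the incremental sensor
        if self.mode == "SINUSOIDAL":
            self.angle = (self.angle + delta_deg) % 360.0
```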
Machine-Vision Aids for Improved Flight Operations
NASA Technical Reports Server (NTRS)
Menon, P. K.; Chatterji, Gano B.
1996-01-01
The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use available information sources for navigation, such as the airport lighting layout, attitude sensors and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known and that images of the airport lighting can be acquired by the camera has led to the synthesis of machine vision based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family of solution methods consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family of methods comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family of solutions, while Algorithms 5 through 7 belong to the second family. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman filter centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms developed.
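None of the seven algorithms is reproduced here, but the underlying estimation problem has a compact modern analogue: with the airport lighting geometry known in world coordinates and its projections detected in the camera image, runway-relative pose follows from a perspective-n-point (PnP) solution. The sketch below uses OpenCV's solver under assumed inputs.

```python
# Hedged analogue of the pose-estimation problem: known 3D light positions
# plus their detected image projections yield camera pose via PnP.
import numpy as np
import cv2

def runway_relative_pose(lights_world, lights_image, K):
    """lights_world: Nx3, lights_image: Nx2, K: 3x3 camera intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(lights_world.astype(np.float64),
                                  lights_image.astype(np.float64),
                                  K, None)
    R, _ = cv2.Rodrigues(rvec)
    # R, tvec map world (runway) coordinates into the camera frame;
    # invert the transform to get the aircraft pose relative to the runway.
    return ok, R, tvec
```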
Machine learning in genetics and genomics
Libbrecht, Maxwell W.; Noble, William Stafford
2016-01-01
The field of machine learning promises to enable computers to assist humans in making sense of large, complex data sets. In this review, we outline some of the main applications of machine learning to genetic and genomic data. In the process, we identify some recurrent challenges associated with this type of analysis and provide general guidelines to assist in the practical application of machine learning to real genetic and genomic data. PMID:25948244
NASA Astrophysics Data System (ADS)
Digney, Bruce L.
2007-04-01
Unmanned vehicle systems are an attractive technology for the military, but their promises have remained largely undelivered. There currently exist fielded remote-controlled UGVs and high-altitude UAVs whose benefits rest on standoff in low-complexity environments, with control reaction-time requirements low enough to allow teleoperation. While effective within their limited operational niche, such systems do not meet the vision of future military UxV scenarios. Such scenarios envision unmanned vehicles operating effectively in complex environments and situations, with high levels of independence and effective coordination with other machines and humans pursuing high-level, changing and sometimes conflicting goals. While these aims are clearly ambitious, they provide necessary targets and inspiration, with hopes of fielding useful semi-autonomous unmanned systems in the near term. Autonomy involves many fields of research, including machine vision, artificial intelligence, control theory, machine learning and distributed systems, all of which are intertwined and share the goal of creating more versatile, broadly applicable algorithms. Cohort is a major Applied Research Program (ARP) led by Defence R&D Canada (DRDC) Suffield; its aim is to develop coordinated teams of unmanned vehicles (UxVs) for urban environments. This paper will discuss the critical science being addressed by DRDC in developing semi-autonomous systems.
NASA Astrophysics Data System (ADS)
Leena, N.; Saju, K. K.
2018-04-01
Nutritional deficiencies in plants are a major concern for farmers, as they affect productivity and thus profit. This work aims to classify nutritional deficiencies in maize plants in a non-destructive manner using image processing and machine learning techniques. The colored images of the leaves are analyzed and classified with a multi-class support vector machine (SVM) method. Several images of maize leaves with known deficiencies, like nitrogen, phosphorous and potassium (NPK), are used to train the SVM classifier prior to the classification of test images. The results show that the method was able to classify and identify nutritional deficiencies.
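A minimal version of the described classifier could be set up in scikit-learn as follows; the color-statistics features (mean and standard deviation per channel) are an assumption for illustration, not the paper's exact features.

```python
# Sketch: per-leaf color statistics fed to a multi-class SVM. Feature
# choice (mean/std of RGB channels) is an assumption for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def color_features(img_rgb):
    px = img_rgb.reshape(-1, 3).astype(np.float32)
    return np.concatenate([px.mean(axis=0), px.std(axis=0)])

# X: stacked feature vectors; y: labels such as {"healthy", "N", "P", "K"}
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); predictions = clf.predict(X_test)
```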
Brewer, Margo
2016-09-01
Creating a vision (visioning) and sensemaking have been described as key leadership practices in the leadership literature. A vision provides clarity, motivation, and direction for staff, and is essential particularly in times of significant change. Closely related to visioning is sensemaking (the organisation of stimuli into a framework allowing people to understand, explain, attribute, extrapolate, and predict). The application of these strategies to leadership within the interprofessional field is yet to be scrutinised. This study examines an interprofessional capability framework as a visioning and sensemaking tool for use by leaders within a university health science curriculum. Interviews with 11 faculty members revealed that the framework had been embedded across multiple years and contexts within the curriculum. Furthermore, a range of responses to the framework were evoked in relation to its use to make sense of interprofessional practice and to provide a vision, guide, and focus for faculty. Overall the findings indicate that the framework can function as both a visioning and sensemaking tool.
Job Prospects for Manufacturing Engineers.
ERIC Educational Resources Information Center
Basta, Nicholas
1985-01-01
Coming from a variety of disciplines, manufacturing engineers are keys to industry's efforts to modernize, with demand exceeding supply. The newest and fastest-growing areas include machine vision, composite materials, and manufacturing automation protocols, each of which is briefly discussed. (JN)
Defect detection and classification of machined surfaces under multiple illuminant directions
NASA Astrophysics Data System (ADS)
Liao, Yi; Weng, Xin; Swonger, C. W.; Ni, Jun
2010-08-01
Continuous improvement of product quality is crucial to a successful and competitive automotive manufacturing industry in the 21st century. The presence of surface porosity on flat machined surfaces such as cylinder heads/blocks and transmission cases may allow leaks of coolant, oil, or combustion gas between critical mating surfaces, thus causing damage to the engine or transmission. Therefore, 100% inline inspection plays an important role in improving product quality. Although techniques of image processing and machine vision have been applied to machined surface inspection, and much improved over the past 20 years, in today's automotive industry surface porosity inspection is still done by skilled humans, which is costly, tedious, time consuming and not capable of reliably detecting small defects. In our study, an automated defect detection and classification system for flat machined surfaces has been designed and constructed. In this paper, the importance of the illuminant direction in a machine vision system is first emphasized, and a surface defect inspection system under multiple directional illuminations is then designed and constructed. Image processing algorithms were developed to detect and classify 5 types of 2D or 3D surface defects (pore, 2D blemish, residue dirt, scratch, and gouge). The steps of image processing include: (1) image acquisition and contrast enhancement, (2) defect segmentation and feature extraction, and (3) defect classification. An artificial machined surface and an actual automotive part (a cylinder head surface) were tested; as a result, microscopic surface defects can be accurately detected and assigned to a surface defect class. The cycle time of this system is sufficiently fast that implementation of 100% inline inspection is feasible. The field of view of this system is 150 mm × 225 mm, and surfaces larger than the field of view can be stitched together in software.
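The three listed stages map onto a conventional OpenCV pipeline; the sketch below is a single-illumination simplification with assumed parameters, omitting the multi-illuminant fusion that the actual system relies on.

```python
# Simplified single-image version of stages (1)-(3): contrast enhancement,
# Otsu segmentation with per-blob shape features, then classification
# (stubbed) into the five defect classes.
import cv2

def inspect(gray_u8):
    enhanced = cv2.equalizeHist(gray_u8)                       # stage (1)
    _, mask = cv2.threshold(enhanced, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)    # stage (2)
    feats = [(cv2.contourArea(c), cv2.arcLength(c, True),
              *cv2.boundingRect(c)) for c in contours]
    # Stage (3) would map each feature vector to one of: pore, 2D blemish,
    # residue dirt, scratch, gouge (classifier omitted in this sketch).
    return feats
```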
The universal numbers. From Biology to Physics.
Marchal, Bruno
2015-12-01
I will explain how the mathematicians have discovered the universal numbers, or abstract computer, and I will explain some abstract biology, mainly self-reproduction and embryogenesis. Then I will explain how and why, and in which sense, some of those numbers can dream and why their dreams can glue together and must, when we assume computationalism in cognitive science, generate a phenomenological physics, as part of a larger phenomenological theology (in the sense of the Greek theologians). The title should have been "From Biology to Physics, through the Phenomenological Theology of the Universal Numbers", if that were not too long for a title. The theology will consist mainly, as in some (neo)platonist Greek-Indian-Chinese tradition, in the truth about numbers' relative relations, with each other, and with themselves. The main difference between Aristotle and Plato is that Aristotle (especially in his common and modern Christian interpretation) makes reality WYSIWYG (what you see is what you get: reality is what we observe and measure, i.e. the natural material physical science), whereas for Plato and the (rational) mystics, what we see might be only the shadow or the border of something else, which might be non-physical (mathematical, arithmetical, theological, …). Since Gödel, we know that Truth, even just the Arithmetical Truth, is vastly bigger than what the machine can rationally justify. Yet, with Church's thesis, and the mechanizability of the diagonalizations involved, machines can apprehend this and can justify their limitations, and get some sense of what might be true beyond what they can prove or justify rationally. Indeed, the incompleteness phenomenon introduces a gap between what is provable by some machine and what is true about that machine, and, as Gödel saw already in 1931, the existence of that gap is accessible to the machine itself, once it has enough provability abilities. Incompleteness separates the true from the provable, and machines can justify this in some way. More importantly, incompleteness entails the distinction between many intensional variants of provability. For example, the absence of reflexion (beweisbar(⌜A⌝) → A, with beweisbar being Gödel's provability predicate) makes it impossible for the machine's provability to obey the axioms usually taken for a theory of knowledge. The most important consequence of this for the machine's possible phenomenology is that it provides sense, indeed arithmetical sense, to intensional variants of provability, like the logics of provability-and-truth, which at the propositional level can be mirrored by the logic of provable-and-true statements (beweisbar(⌜A⌝) ∧ A). It is incompleteness which makes this logic different from the logic of provability. Other variants, like provable-and-consistent, or provable-and-consistent-and-true, appear in the same way, and inherit the incompleteness splitting, unlike beweisbar(⌜A⌝) ∧ A. I will recall the thought experiment which motivates the use of those intensional variants to associate a knower and an observer in some canonical way with the machines or the numbers. We will in this way get an abstract and phenomenological theology of a machine M through the true logics of its true self-referential abilities (even if not provable, or knowable, by the machine itself), in those different intensional senses.
Cognitive science and theoretical physics motivate the study of those logics with the arithmetical interpretation of the atomic sentences restricted to the "verifiable" (Σ1) sentences, which is the way to study the theology of the computationalist machine. This provides a logic of the observable, as expected by the Universal Dovetailer Argument, which will be recalled briefly, and which can lead to a comparison of the machine's logic of physics with the empirical logic of the physicists (like quantum logic). This also leads to a series of open problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Computer vision cracks the leaf code
Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas
2016-01-01
Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664
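The learned "codebook of visual elements" is in the spirit of a bag-of-visual-words model; the sketch below shows that generic recipe with assumed descriptor inputs and codebook size, not the authors' configuration.

```python
# Generic bag-of-visual-words sketch: cluster local descriptors into a
# codebook, histogram each leaf image over it, train a linear classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_codebook(descriptor_stack, k=512):   # k is an assumed size
    return KMeans(n_clusters=k, n_init=4).fit(descriptor_stack)

def encode(descriptors, codebook):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)         # normalized word histogram

# X = [encode(d, codebook) for d in per_image_descriptors]
# LinearSVC().fit(X, family_or_order_labels)
```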
MLBCD: a machine learning tool for big clinical data.
Luo, Gang
2015-01-01
Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.
The compound Atwood machine problem
NASA Astrophysics Data System (ADS)
Lopes Coelho, R.
2017-05-01
The present paper accounts for progress in physics teaching in the sense that a problem previously closed to students for being too difficult is gained for the high school curriculum. This problem is the compound Atwood machine with three bodies; a worked sketch of the standard idealization follows. Its introduction into high school classes is based on a recent study on the weighing of an Atwood machine.
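For reference, the textbook idealization (massless pulleys and inextensible strings) goes as follows; the paper's own treatment may differ. Let m1 hang from the fixed pulley, with m2 and m3 on a movable pulley on the other side, a1 the downward acceleration of m1, ar the acceleration of m2 relative to the movable pulley, and T the tension in the lower string (so 2T in the upper).

```latex
\begin{align*}
  m_1 g - 2T &= m_1 a_1,\\
  m_2 g - T  &= m_2 (a_r - a_1),\\
  m_3 g - T  &= m_3 (-a_r - a_1).
\end{align*}
% Eliminating T and a_r:
\[
  a_1 = g\,\frac{m_1(m_2+m_3) - 4 m_2 m_3}{m_1(m_2+m_3) + 4 m_2 m_3},
  \qquad
  T = \frac{2 m_2 m_3 (g + a_1)}{m_2 + m_3}.
\]
% Sanity check: m_2 = m_3 = m_1/2 gives a_1 = 0, as expected.
```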
The Amazon Region; A Vision of Sovereignty
1998-04-06
A Strategy Research Project by Lieutenant Colonel Eduardo Jose Barbosa. According to … and SPOT remote-sensing satellite images, about 90% of the Amazon jungle, some 280 million hectares of vegetation, remains almost untouched. Questions about the region's increasing energy needs remain unanswered, as does whether the Indian population has been jeopardized by the development of the Amazon Region.
Vision based flight procedure stereo display system
NASA Astrophysics Data System (ADS)
Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng
2008-03-01
A virtual-reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the approach area of the flight destination. Used in pilots' preflight preparation, the system gives the aircrew more vivid information along the approach to the destination. This improves the aviator's confidence before carrying out the flight mission and, accordingly, improves flight safety. The system is also useful for validating visual flight procedure designs, and it helps flight procedure design.
NASA Technical Reports Server (NTRS)
Barrett, Eamon B. (Editor); Pearson, James J. (Editor)
1989-01-01
Image understanding concepts and models, image understanding systems and applications, advanced digital processors and software tools, and advanced man-machine interfaces are among the topics discussed. Particular papers are presented on such topics as neural networks for computer vision, object-based segmentation and color recognition in multispectral images, the application of image algebra to image measurement and feature extraction, and the integration of modeling and graphics to create an infrared signal processing test bed.
Neural network expert system for X-ray analysis of welded joints
NASA Astrophysics Data System (ADS)
Kozlov, V. V.; Lapik, N. V.; Popova, N. V.
2018-03-01
The use of intelligent technologies for the automated analysis of product quality is one of the main trends in modern machine building. At the same time, methods based on artificial neural networks, as the basis for building automated intelligent diagnostic systems, are developing rapidly in various spheres of human activity. Machine vision technologies make it possible to effectively detect the presence of certain regularities in the analyzed image, including defects of welded joints in radiography data.
2009-04-29
… visual coverage, armed with responsive, high-volume offensive or defensive door-mounted machine guns and nearly 360° weapons coverage. In addition, a fixed forward gun option and forward-firing rocket capacity substantially expand firepower options. Team this crew and machine with the world's premiere … escort. (Cites Headquarters, U.S. Marine Corps, Vision & Strategy 2025, Washington, DC: Headquarters, U.S. Marine Corps, June 18, 2008, p. 9; and Scott Atwood.)
Proceedings of the NASA Conference on Space Telerobotics, volume 1
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Editor); Seraji, Homayoun (Editor)
1989-01-01
The theme of the Conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty.
Bucak, İhsan Ömür
2010-01-01
In the automotive industry, electromagnetic variable reluctance (VR) sensors have been extensively used to measure engine position and speed through a toothed wheel mounted on the crankshaft. In this work, an application that already uses the VR sensing unit for engine and/or transmission has been chosen to infer, this time, the indirect position of the electric machine in a parallel Hybrid Electric Vehicle (HEV) system. A VR sensor has been chosen to correct the position of the electric machine, mainly because it may still become critical in the operation of HEVs to avoid possible vehicle failures during the start-up and on-the-road, especially when the machine is used with an internal combustion engine. The proposed method uses Chi-square test and is adaptive in a sense that it derives the compensation factors during the shaft operation and updates them in a timely fashion. PMID:22294906
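The abstract leaves the adaptive chi-square step unspecified; one loose reading, offered purely as an assumption-laden sketch rather than the paper's method, is a goodness-of-fit test on the tooth-interval pattern with per-tooth compensation factors derived online.

```python
# Loose interpretive sketch (not from the paper): at steady speed all tooth
# intervals should match their mean, so per-tooth compensation factors can
# be learned online, with a chi-square statistic flagging a poor fit.
import numpy as np

def update_compensation(tooth_intervals):
    expected = tooth_intervals.mean()
    comp = expected / tooth_intervals                  # per-tooth corrections
    chi2 = np.sum((tooth_intervals - expected) ** 2 / expected)
    return comp, chi2                                  # large chi2 -> re-learn
```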
Multiple layer identification label using stacked identification symbols
NASA Technical Reports Server (NTRS)
Schramm, Harry F. (Inventor)
2005-01-01
An automatic identification system and method are provided which employ a machine readable multiple layer label. The label has a plurality of machine readable marking layers stacked one upon another. Each of the marking layers encodes an identification symbol detectable using one or more sensing technologies. The various marking layers may comprise the same marking material or each marking layer may comprise a different medium having characteristics detectable by a different sensing technology. These sensing technologies include x-ray, radar, capacitance, thermal, magnetic and ultrasonic. A complete symbol may be encoded within each marking layer or a symbol may be segmented into fragments which are then divided within a single marking layer or encoded across multiple marking layers.
A comparative study of machine learning models for ethnicity classification
NASA Astrophysics Data System (ADS)
Trivedi, Advait; Bessie Amali, D. Geraldine
2017-11-01
This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem, with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta-information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression is documented. Experimental results indicate that the logistic regression classifier provides a more accurate classification than the support vector machine.
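The reported comparison reduces to a standard scikit-learn experiment; the feature pipeline below is assumed, not the paper's.

```python
# Sketch: train logistic regression and an SVM on the same ethnicity
# features and compare held-out accuracy (data preparation assumed).
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def compare(X, y):
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
    for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                        ("svm", SVC(kernel="rbf", gamma="scale"))]:
        model.fit(Xtr, ytr)
        print(name, accuracy_score(yte, model.predict(Xte)))
```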
Mewes, D; Trapp, R P
2000-01-01
Guards on machine tools are meant to protect operators from injuries caused by tools, workpieces, and fragments hurled out of the machine's working zone. This article presents the impact resistance requirements that guards must satisfy according to European safety standards for machine tools. Based upon these standards, the impact resistance of different guard materials was determined using cylindrical steel projectiles. Polycarbonate proves to be a suitable material for vision panels because of its high energy absorption capacity: the impact resistance of 8-mm-thick polycarbonate is roughly equal to that of a 3-mm-thick steel sheet Fe P01. Its limited ageing stability, however, makes it necessary to protect polycarbonate against cooling lubricants by means of additional panes on both sides.
Technology and Web-Based Support
ERIC Educational Resources Information Center
Smith, Carol
2008-01-01
Many types of technology support caregiving: (1) Assistive devices include medicine dispensers, feeding and bathing machines, clothing with polypropylene fibers that stimulate muscles, intelligent ambulatory walkers for those with both vision and mobility impairment, medication reminders, and safety alarms; (2) Telecare devices ranging from…
Categorization of extraneous matter in cotton using machine vision systems
USDA-ARS?s Scientific Manuscript database
The Cotton Trash Identification System (CTIS) developed at the Southwestern Cotton Ginning Research Laboratory was evaluated for identification and categorization of extraneous matter in cotton. The CTIS bark/grass categorization was evaluated with USDA-Agricultural Marketing Service (AMS) extraneou...
Hyperspectral imaging for nondestructive evaluation of tomatoes
USDA-ARS?s Scientific Manuscript database
Machine vision methods for quality and defect evaluation of tomatoes have been studied for online sorting and robotic harvesting applications. We investigated the use of a hyperspectral imaging system for quality evaluation and defect detection for tomatoes. Hyperspectral reflectance images were a...
The influence of active vision on the exoskeleton of intelligent agents
NASA Astrophysics Data System (ADS)
Smith, Patrice; Terry, Theodore B.
2016-04-01
Chameleonization occurs when a self-learning autonomous mobile system's (SLAMR) active vision scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. Intelligent agents having the ability to adapt to their environment and exhibit key survivability characteristics of their environments would largely owe that ability to the use of active vision. Active vision would allow the intelligent agent to scan its environment and adapt as needed in order to avoid detection. The SLAMR system would have an exoskeleton that would change based on the surface it was perched on; this is known as the "chameleon effect," not in the common sense of the term, but in the techno-bio-inspired meaning addressed in our previous paper. Active vision, utilizing stereoscopic color sensing functionality, would enable the intelligent agent to scan an object within its close proximity, determine the color scheme, and match it, allowing the agent to blend with its environment. Through the use of its optical capabilities, the SLAMR system would be able to further determine its position, taking into account spatial and temporal correlation and the spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects, using edge detection, image filtering, and feature extraction, are essential for an intelligent agent to gain additional knowledge about its environmental surroundings.
Non-contact capacitance based image sensing method and system
Novak, James L.; Wiczer, James J.
1995-01-01
A system and a method is provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.
Non-contact capacitance based image sensing method and system
Novak, James L.; Wiczer, James J.
1994-01-01
A system and a method for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.
Beltrán Ortega, Julio; Martínez Gila, Diego M; Aguilera Puerto, Daniel; Gámez García, Javier; Gómez Ortega, Juan
2016-11-01
The quality of virgin olive oil is related to the agronomic conditions of the olive fruits and the process variables of the production process. Nowadays, food markets demand better products in terms of safety, health and organoleptic properties at competitive prices. Innovative techniques for process control, inspection and classification have been developed in order to achieve these requirements. This paper presents a review of the most significant sensing technologies which are increasingly used in the olive oil industry to supervise and control the virgin olive oil production process. Throughout the present work, the main research studies in the literature that employ non-invasive technologies such as infrared spectroscopy, computer vision, machine olfaction technology, electronic tongues and dielectric spectroscopy are analysed and their main results and conclusions are presented. These technologies are used on olive fruit, olive slurry and olive oil to determine parameters such as acidity, peroxide indexes, ripening indexes, organoleptic properties and minor components, among others. © 2016 Society of Chemical Industry.
Nelson, Jonathan M.; Kinzel, Paul J.; Schmeeckle, Mark Walter; McDonald, Richard R.; Minear, Justin T.
2016-01-01
Noncontact methods for measuring water-surface elevation and velocity in laboratory flumes and rivers are presented with examples. Water-surface elevations are measured using an array of acoustic transducers in the laboratory and using laser scanning in field situations. Water-surface velocities are based on using particle image velocimetry or other machine vision techniques on infrared video of the water surface. Using spatial and temporal averaging, results from these methods provide information that can be used to develop estimates of discharge for flows over known bathymetry. Making such estimates requires relating water-surface velocities to vertically averaged velocities; the methods here use standard relations. To examine where these relations break down, laboratory data for flows over simple bumps of three amplitudes are evaluated. As anticipated, discharges determined from surface information can have large errors where nonhydrostatic effects are large. In addition to investigating and characterizing this potential error in estimating discharge, a simple method for correction of the issue is presented. With a simple correction based on bed gradient along the flow direction, remotely sensed estimates of discharge appear to be viable.
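The step from surface velocity to discharge can be illustrated with the common velocity-index relation (depth-averaged velocity taken as roughly 0.85 of surface velocity for a log-law profile); the paper's "standard relations" may differ, and the correction for nonhydrostatic effects over bumps is omitted here.

```python
# Back-of-envelope discharge estimate from remotely sensed surface
# velocities over known bathymetry; k = 0.85 is the common log-law
# velocity index, used here as an assumption.
import numpy as np

def discharge(surface_v, depth, station_width, k=0.85):
    """Per-station arrays in consistent units; returns total Q."""
    return float(np.sum(k * surface_v * depth * station_width))
```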
Big Data breaking barriers - first steps on a long trail
NASA Astrophysics Data System (ADS)
Schade, S.
2015-04-01
Most data sets and streams have a geospatial component; some people even claim that about 80% of all data is related to location. In the era of Big Data this number might even be underestimated, as data sets interrelate and initially non-spatial data becomes indirectly geo-referenced. The optimal treatment of Big Data thus requires advanced methods and technologies for handling the geospatial aspects of data storage, processing, pattern recognition, prediction, visualisation and exploration. On the one hand, our work draws on the earth and environmental sciences for existing interoperability standards, and for the foundational data structures, algorithms and software that are required to meet these geospatial information handling tasks. On the other hand, we are concerned with the arising need to combine human analysis capacities (intelligence augmentation) with machine power (artificial intelligence). This paper provides an overview of the emerging landscape and outlines our (Digital Earth) vision for addressing the upcoming issues. We particularly advocate the projection and re-use of existing environmental, earth observation and remote sensing expertise in other sectors, i.e. breaking down the barriers of all of these silos by investigating integrated applications.
Intelligent Vision On The SM9O Mini-Computer Basis And Applications
NASA Astrophysics Data System (ADS)
Hawryszkiw, J.
1985-02-01
A distinction has to be made between image processing and vision. Image processing finds its roots in the strong tradition of linear signal processing and promotes geometrical transform techniques such as filtering, compression, and restoration. Its purpose is to transform an image so that a human observer can easily extract the information significant to him, for example edges after a gradient operator, or a specific direction after a directional filtering operation. Image processing consists, in fact, of a set of local or global space-time transforms; the interpretation of the final image is done by the human observer. The purpose of vision is to extract the semantic content of the image. The machine can then understand that content and run a process of decision, which turns into an action. Thus, intelligent vision depends on image processing, pattern recognition, and artificial intelligence.
Quantum vision in three dimensions
NASA Astrophysics Data System (ADS)
Roth, Yehuda
We present four models for describing 3-D vision. Similar to the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than pure objective encryption. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of his selected states, as a tool for interpreting reality in accordance with the observer's concepts. This is the basic concept of our study, and it is introduced in the first model. The other models suggest "softened" versions that might be much easier to implement. Our quantum interpretation approach contributes to the following fields. In technology, the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. In artificial intelligence, in the desire to create a machine that exchanges information using human terminologies, our interpretation approach seems appropriate.
Neuromorphic vision sensors and preprocessors in system applications
NASA Astrophysics Data System (ADS)
Kramer, Joerg; Indiveri, Giacomo
1998-09-01
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
Nose, Atsushi; Yamazaki, Tomohiro; Katayama, Hironobu; Uehara, Shuji; Kobayashi, Masatsugu; Shida, Sayaka; Odahara, Masaki; Takamiya, Kenichi; Matsumoto, Shizunori; Miyashita, Leo; Watanabe, Yoshihiro; Izawa, Takashi; Muramatsu, Yoshinori; Nitta, Yoshikazu; Ishikawa, Masatoshi
2018-04-24
We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip comprises a 1/3.2-inch, 1.27 Mpixel, 500 fps (0.31 Mpixel, 1000 fps, 2 × 2 binning) vision chip with 3D-stacked column-parallel Analog-to-Digital Converters (ADCs) and 140 Giga Operation per Second (GOPS) programmable Single Instruction Multiple Data (SIMD) column-parallel PEs for new sensing applications. The 3D-stacked structure and column parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning.
Ergonomics for enhancing detection of machine abnormalities.
Illankoon, Prasanna; Abeysekera, John; Singh, Sarbjeet
2016-10-17
Detecting abnormal machine conditions is of great importance in an autonomous maintenance environment. Ergonomic aspects can be invaluable when detection of machine abnormalities using human senses is examined. This research outlines the ergonomic issues involved in detecting machine abnormalities and suggests how ergonomics would improve such detections. Cognitive Task Analysis was performed in a plant in Sri Lanka where Total Productive Maintenance is being implemented, to identify the sensory types that would be used to detect machine abnormalities and the relevant ergonomic characteristics. As the outcome of this research, a methodology comprising an Ergonomic Gap Analysis Matrix for machine abnormality detection is presented.
Visual tracking strategies for intelligent vehicle highway systems
NASA Astrophysics Data System (ADS)
Smith, Christopher E.; Papanikolopoulos, Nikolaos P.; Brandt, Scott A.; Richards, Charles
1995-01-01
The complexity and congestion of current transportation systems often produce traffic situations that jeopardize the safety of the people involved. These situations vary from maintaining a safe distance behind a leading vehicle to safely allowing a pedestrian to cross a busy street. Environmental sensing plays a critical role in virtually all of these situations. Of the sensors available, vision sensors provide information that is richer and more complete than other sensors, making them a logical choice for a multisensor transportation system. In this paper we present robust techniques for intelligent vehicle-highway applications where computer vision plays a crucial role. In particular, we demonstrate that the controlled active vision framework can be utilized to provide a visual sensing modality to a traffic advisory system in order to increase the overall safety margin in a variety of common traffic situations. We have selected two application examples, vehicle tracking and pedestrian tracking, to demonstrate that the framework can provide precisely the type of information required to effectively manage the given situation.
Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.
Rumei Zhang; Hao Liu; Jianda Han
2017-07-01
Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with cumulative integration error. The proposed fusion is anticipated to compensate for their shortcomings and improve tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphology operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into a distal-tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y and z, respectively. Results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
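The frame-chaining step at the heart of the fusion is simple to state; the sketch below assumes a calibrated 4x4 homogeneous registration matrix T_cam_fbg, as implied by the abstract, with the fusion rule itself left as a comment since the paper's weighting is not given here.

```python
# Map the FBG-reconstructed tip position from the sensor frame into the
# camera frame through the previously calibrated registration matrix.
import numpy as np

def tip_in_camera(tip_fbg_xyz, T_cam_fbg):
    p = np.append(np.asarray(tip_fbg_xyz, dtype=float), 1.0)  # homogeneous
    return (T_cam_fbg @ p)[:3]

# A fused estimate could, for example, average this with the stereo-vision
# tip position, weighting each source by its observed error statistics.
```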
Visual and haptic integration in the estimation of softness of deformable objects
Cellini, Cristiano; Kaim, Lukas; Drewing, Knut
2013-01-01
Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
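The "optimal integration" benchmark referred to above is the standard maximum-likelihood cue-combination model, in which each sense is weighted by its relative reliability (inverse variance):

```latex
\[
  \hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H,
  \qquad
  w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2},
  \quad
  w_H = 1 - w_V,
\]
% with predicted combined variance
\[
  \sigma_{VH}^2 = \frac{\sigma_V^2\,\sigma_H^2}{\sigma_V^2 + \sigma_H^2},
\]
% which is never worse than either single cue. The study's finding is that
% the empirical visual weight exceeded the predicted w_V and the bisensory
% variance exceeded the predicted minimum.
```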
Jonathan P. Dandois; Erle C. Ellis
2013-01-01
High spatial resolution three-dimensional (3D) measurements of vegetation by remote sensing are advancing ecological research and environmental management. However, substantial economic and logistical costs limit this application, especially for observing phenological dynamics in ecosystem structure and spectral traits. Here we demonstrate a new aerial remote sensing...
Vision in Plants via Plant-Specific Ocelli?
Baluška, Frantisek; Mancuso, Stefano
2016-09-01
Although plants are sessile organisms, almost all of their organs move in space and thus require plant-specific senses to find their proper place with respect to their neighbours. Here we discuss recent studies suggesting that plants are able to sense shapes and colours via plant-specific ocelli. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wireless sensor systems for sense/decide/act/communicate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Nina M.; Cushner, Adam; Baker, James A.
2003-12-01
After 9/11, the United States (U.S.) was suddenly pushed into challenging situations it could no longer ignore as a simple spectator. The War on Terrorism (WoT) was suddenly ignited, and no one knows when this war will end. While the government is exploring many existing and potential technologies, the area of wireless sensor networks (WSN) has emerged as a foundation for establishing future national security. Unlike other technologies, WSN could provide the virtual presence capabilities needed for precision awareness and response in military, intelligence, and homeland security applications. The Advance Concept Group (ACG) vision of a Sense/Decide/Act/Communicate (SDAC) sensor system is an instantiation of the WSN concept that takes a 'systems of systems' view. Each sensing node will exhibit the ability to: sense the environment around it, decide as a collective what the situation of the environment is, act in an intelligent and coordinated manner in response to this situational determination, and communicate its actions amongst its peers and to a human command. This LDRD report provides a review of the research and development done to bring the SDAC vision closer to reality.
2016-01-01
Language acquisition is based on our knowledge about the world and forms through multiple sensory-motor interactions with the environment. We link the properties of individual experience formed at different stages of ontogeny with the phased development of sensory modalities and with the acquisition of words describing the appropriate forms of sensitivity. To test whether early-formed experience related to skin sensations, olfaction and taste differs from later-formed experience related to vision and hearing, we asked Russian-speaking participants to categorize or to assess the pleasantness of experience mentally reactivated by sense-related adjectives found in common dictionaries. It was found that categorizing adjectives in relation to vision, hearing and skin sensations took longer than categorizing adjectives in relation to olfaction and taste. In addition, experience described by adjectives predominantly related to vision, hearing and skin sensations took more time for the pleasantness judgment and generated less intense emotions than that described by adjectives predominantly related to olfaction and taste. Interestingly the dynamics of skin resistance corresponded to the intensity and pleasantness of reported emotions. We also found that sense-related experience described by early-acquired adjectives took less time for the pleasantness judgment and generated more intense and more positive emotions than that described by later-acquired adjectives. Correlations were found between the time of the pleasantness judgment of experience, intensity and pleasantness of reported emotions, age of acquisition, frequency, imageability and length of sense-related adjectives. All in all these findings support the hypothesis that early-formed experience is less differentiated than later-formed experience. PMID:27400090
Multiple Optical Filter Design Simulation Results
NASA Astrophysics Data System (ADS)
Mendelsohn, J.; Englund, D. C.
1986-10-01
In this paper we continue our investigation of the application of matched filters to robotic vision problems. Specifically, we are concerned with the tray-picking problem. Our principal interest in this paper is the examination of summation effects which arise from attempting to reduce the matched-filter memory size by averaging matched filters. While matched filtering theory is ideally implemented for pattern recognition or machine vision through the use of optics and optical correlators, in this paper the results were obtained through a digital simulation of the optical process.
Before They Can Speak, They Must Know.
ERIC Educational Resources Information Center
Cromie, William J.; Edson, Lee
1984-01-01
Intelligent relationships with people are among the goals for tomorrow's computers. Knowledge-based systems used and being developed to achieve these goals are discussed. Automatic learning, producing inferences, parallelism, program languages, friendly machines, computer vision, and biomodels are among the topics considered. (JN)
Robust crop and weed segmentation under uncontrolled outdoor illumination
USDA-ARS?s Scientific Manuscript database
A new machine vision for weed detection was developed from RGB color model images. Processes included in the algorithm for the detection were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
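Under common defaults, the listed steps correspond to the sketch below; the paper's statistically computed, adaptively adjusted threshold is approximated here by Otsu's method, which is an assumption.

```python
# Excess-green vegetation mask: ExG index, automatic threshold, median
# filter to suppress speckle (Otsu substitutes for the paper's adaptive
# statistical threshold in this sketch).
import cv2
import numpy as np

def weed_mask(img_bgr):
    b, g, r = cv2.split(img_bgr.astype(np.float32) / 255.0)
    exg = 2 * g - r - b                                   # excess green
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.medianBlur(mask, 5)
```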
Teaching Camera Calibration by a Constructivist Methodology
ERIC Educational Resources Information Center
Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.
2010-01-01
This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…
CATEGORIZATION OF EXTRANEOUS MATTER IN COTTON USING MACHINE VISION SYSTEMS
USDA-ARS?s Scientific Manuscript database
The Cotton Trash Identification System (CTIS) was developed at the Southwestern Cotton Ginning Research Laboratory to identify and categorize extraneous matter in cotton. The CTIS bark/grass categorization was evaluated with USDA-Agricultural Marketing Service (AMS) extraneous matter calls assigned ...
DOT National Transportation Integrated Search
2002-12-01
The Virginia Department of Transportation, like many other transportation agencies, has invested significantly in extensive closed circuit television (CCTV) systems to monitor freeways in urban areas. Although these systems have proven very effective...
A fuzzy structural matching scheme for space robotics vision
NASA Technical Reports Server (NTRS)
Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka
1994-01-01
In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the subsequent low-level matching process. Three-dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.
ERIC Educational Resources Information Center
Currie, John R.
The principal of a newly opened elementary school implemented a practicum study designed to unify faculty, parents, staff, and children; add direction to the program; develop a sense of purpose; and increase participation. It was expected that a vision statement would be developed in the school's first year of operation, and that parents and staff…
ERIC Educational Resources Information Center
Winkler-Reid, Sarah
2017-01-01
This article explores the enskillment of vision through which girls in a London school learn to see bodies and selves in particular ways. Dominant body and beauty ideals inform these processes but cannot be reduced to them. Drawing from the anthropological literature on situated learning and the senses, I propose an anthropological approach to…
Local Learning Strategies for Wake Identification
NASA Astrophysics Data System (ADS)
Colvert, Brendan; Alsalman, Mohamad; Kanso, Eva
2017-11-01
Swimming agents, biological and engineered alike, must navigate the underwater environment to survive. Tasks such as autonomous navigation, foraging, mating, and predation require the ability to extract critical cues from the hydrodynamic environment. A substantial body of evidence supports the hypothesis that biological systems leverage local sensing modalities, including flow sensing, to gain knowledge of their global surroundings. The nonlinear nature and high degree of complexity of fluid dynamics makes the development of algorithms for implementing localized sensing in bioinspired engineering systems essentially intractable for many systems of practical interest. In this work, we use techniques from machine learning for training a bioinspired swimmer to learn from its environment. We demonstrate the efficacy of this strategy by learning how to sense global characteristics of the wakes of other swimmers measured only from local sensory information. We conclude by commenting on the advantages and limitations of this data-driven, machine learning approach and its potential impact on broader applications in underwater sensing and navigation.
Compact Microscope Imaging System with Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer-controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability: tasks that, heretofore, have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: The translation stage is used to move the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that results in the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
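The autofocus step lends itself to a short sketch: score each stage position by how much FFT energy sits at high spatial frequencies and keep the best. The radial cutoff, the score definition, and the `capture_at` stage callback are illustrative assumptions, not the CMIS implementation.

```python
import numpy as np

def focus_score(image, cutoff=0.25):
    """Fraction of power-spectrum energy above a radial
    spatial-frequency cutoff; a sharply focused image pushes energy
    toward high frequencies, so a larger score means better focus."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spec.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - h / 2.0, x - w / 2.0) / (min(h, w) / 2.0)
    return spec[r > cutoff].sum() / spec.sum()

def autofocus(capture_at, positions):
    """Keep the stage position whose image maximizes the focus score;
    call once each for coarse, medium, and fine position grids.
    `capture_at` is a hypothetical move-stage-and-grab-image callback."""
    return max(positions, key=lambda z: focus_score(capture_at(z)))
```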
Detection of oranges from a color image of an orange tree
NASA Astrophysics Data System (ADS)
Weeks, Arthur R.; Gallagher, A.; Eriksson, J.
1999-10-01
The progress of robotic and machine vision technology has increased the demand for sophisticated methods for performing automatic harvesting of fruit. The harvesting of fruit, until recently, has been performed manually and is quite labor intensive. An automatic robot harvesting system that uses machine vision to locate and extract the fruit would free the agricultural industry from the ups and downs of the labor market. The environment in which robotic fruit harvesters must work presents many challenges due to the inherent variability from one location to the next. This paper takes a step towards this goal by outlining a machine vision algorithm that detects and accurately locates oranges from a color image of an orange tree. Previous work in this area has focused on differentiating the orange regions from the rest of the picture and not locating the actual oranges themselves. Failure to locate the oranges, however, leads to a reduced number of successful pick attempts. This paper presents a new approach for orange region segmentation in which the circumference of the individual oranges as well as partially occluded oranges are located. Accurately defining the circumference of each orange allows a robotic harvester to cut the stem of the orange by either scanning the top of the orange with a laser or by directing a robotic arm towards the stem to automatically cut it. A modified version of the K-means algorithm is used to initially segment the oranges from the canopy of the orange tree. Morphological processing is then used to locate occluded oranges and an iterative circle finding algorithm is used to define the circumference of the segmented oranges.
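A compact sketch of that flow, with OpenCV's stock K-means and Hough-circle routines standing in for the paper's modified K-means and iterative circle finder; the orange-cluster selection rule and all parameter values are assumptions.

```python
import cv2
import numpy as np

def find_oranges(bgr, k=3):
    """Cluster pixel colors with K-means, keep the most orange-like
    cluster, clean it morphologically, then fit circles so partially
    occluded fruit still receive a full circumference estimate."""
    pixels = bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    # Pick the cluster whose center is most "orange": high red, low blue
    # (image is in BGR order), a heuristic rather than the paper's rule.
    orange_idx = int(np.argmax(centers[:, 2] - centers[:, 0]))
    mask = (labels.reshape(bgr.shape[:2]) == orange_idx).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=20, param1=100, param2=15,
                               minRadius=5, maxRadius=100)
    return circles  # array of (x, y, r) per detected orange, or None
```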
Anomalous Cases of Astronaut Helmet Detection
NASA Technical Reports Server (NTRS)
Dolph, Chester; Moore, Andrew J.; Schubert, Matthew; Woodell, Glenn
2015-01-01
An astronaut's helmet is an invariant, rigid image element that is well suited for identification and tracking using current machine vision technology. Future space exploration will benefit from the development of astronaut detection software for search and rescue missions based on EVA helmet identification. However, helmets are solid white, except for metal brackets that attach accessories such as supplementary lights. We compared the performance of a widely used machine vision pipeline on a standard-issue NASA helmet with and without affixed experimental feature-rich patterns. Performance on the patterned helmet was far more robust. We found that four different feature-rich patterns are sufficient to identify a helmet and determine its orientation as it is rotated about the yaw, pitch, and roll axes. During helmet rotation the field of view changes to frames containing parts of two or more feature-rich patterns. We took reference images in these locations to fill in detection gaps. These multiple-pattern references added substantial benefit to detection; however, they generated the majority of the anomalous cases. In these few instances, our algorithm keys in on one feature-rich pattern of a multiple-pattern reference and makes an incorrect prediction of the location of the other feature-rich patterns. We describe and make recommendations on ways to mitigate anomalous cases in which detection of one or more feature-rich patterns fails. While the number of such cases is only a small percentage of the tested helmet orientations, they illustrate important design considerations for future spacesuits. In addition to our four successful feature-rich patterns, we present unsuccessful patterns and discuss the cause of their poor performance from a machine vision perspective. Future helmets designed with these considerations will enable automated astronaut detection and thereby enhance mission operations and extraterrestrial search and rescue.
Janve, Bhaskar; Yang, Wade; Sims, Charles
2015-06-01
Power ultrasound reduces the traditional corn steeping time from 18 to 1.5 h during tortilla chip dough (masa) processing. This study sought to examine consumer (n = 99) acceptability and quality of tortilla chips made from masa produced by traditional versus ultrasonic methods. Overall appearance, flavor, and texture acceptability scores were evaluated using a 9-point hedonic scale. The baked chips (process intermediate) before and after frying (finished product) were analyzed using a texture analyzer and machine vision. The texture values were determined using the 3-point bend test via breaking force gradient (BFG), peak breaking force (PBF), and breaking distance (BD). The fracturing properties were determined by the crisp fracture support rig using fracture force gradient (FFG), peak fracture force (PFF), and fracture distance (FD). The machine vision evaluated the total surface area, lightness (L), color difference (ΔE), hue (°h), and chroma (C*). The results were evaluated by analysis of variance and means were separated using Tukey's test. Machine vision values of L and °h were higher (P < 0.05) and ΔE was lower (P < 0.05) for fried chips, and L and °h were significantly (P < 0.05) higher for baked chips, produced by ultra-sonication as compared to traditional processing. Baked chip texture for ultra-sonication was significantly higher (P < 0.05) in BFG, BPD, PFF, and FD. Fried tortilla chip texture was significantly higher (P < 0.05) in BFG and PFF for ultra-sonication than for traditional processing. However, the instrumental differences were not detected in sensory analysis, suggesting the potential of power ultrasound as a tortilla chip processing aid. © 2015 Institute of Food Technologists®
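For reference, the color metrics named above follow directly from CIELAB coordinates; a minimal sketch using the simple CIE76 form of ΔE, which may differ from the formula the machine vision software actually applied:

```python
import numpy as np

def lab_metrics(lab, lab_ref):
    """Lightness L, hue angle (degrees), chroma C*, and CIE76 color
    difference Delta-E computed from a CIELAB triple and a reference."""
    L, a, b = lab
    chroma = np.hypot(a, b)                          # C* = sqrt(a^2 + b^2)
    hue_deg = np.degrees(np.arctan2(b, a)) % 360.0   # hue angle in [0, 360)
    delta_e = np.linalg.norm(np.asarray(lab, float) -
                             np.asarray(lab_ref, float))
    return L, hue_deg, chroma, delta_e
```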
NASA Technical Reports Server (NTRS)
Tendam, I. M. (Editor); Morrison, D. B.
1979-01-01
Papers are presented on techniques and applications for the machine processing of remotely sensed data. Specific topics include the Landsat-D mission and thematic mapper, data preprocessing to account for atmospheric and solar illumination effects, sampling in crop area estimation, the LACIE program, the assessment of revegetation on surface mine land using color infrared aerial photography, the identification of surface-disturbed features through a nonparametric analysis of Landsat MSS data, the extraction of soil data in vegetated areas, and the transfer of remote sensing computer technology to developing nations. Attention is also given to the classification of multispectral remote sensing data using context, the use of guided clustering techniques for Landsat data analysis in forest land cover mapping, crop classification using an interactive color display, and future trends in image processing software and hardware.
Visual enhancing of tactile perception in the posterior parietal cortex.
Ro, Tony; Wallace, Ruth; Hagedorn, Judith; Farnè, Alessandro; Pienkos, Elizabeth
2004-01-01
The visual modality typically dominates over our other senses. Here we show that after inducing an extreme conflict in the left hand between vision of touch (present) and the feeling of touch (absent), sensitivity to touch increases for several minutes after the conflict. Transcranial magnetic stimulation of the posterior parietal cortex after this conflict not only eliminated the enduring visual enhancement of touch, but also impaired normal tactile perception. This latter finding demonstrates a direct role of the parietal lobe in modulating tactile perception as a result of the conflict between these senses. These results provide evidence for visual-to-tactile perceptual modulation and demonstrate effects of illusory vision of touch on touch perception through a long-lasting modulatory process in the posterior parietal cortex.
DOT National Transportation Integrated Search
2016-01-01
State highway agencies (SHAs) routinely employ semi-automated and automated image-based methods for network-level : pavement-cracking data collection, and there are different types of pavement-cracking data collected by SHAs for reporting and : manag...
A Starter's Guide to Artificial Intelligence.
ERIC Educational Resources Information Center
McConnell, Barry A.; McConnell, Nancy J.
1988-01-01
Discussion of the history and development of artificial intelligence (AI) highlights a bibliography of introductory books on various aspects of AI, including AI programing; problem solving; automated reasoning; game playing; natural language; expert systems; machine learning; robotics and vision; critics of AI; and representative software. (LRW)
Detection of eviscerated poultry spleen enlargement by machine vision
NASA Astrophysics Data System (ADS)
Tao, Yang; Shao, June J.; Skeeles, John K.; Chen, Yud-Ren
1999-01-01
The size of a poultry spleen is an indication of whether the bird is wholesome or has a virus-related disease. This study explored the possibility of detecting poultry spleen enlargement with a computer imaging system to assist human inspectors in food safety inspections. Images of 45-day-old hybrid turkey internal viscera were taken using fluorescent and UV lighting systems. Image processing algorithms including linear transformation, morphological operations, and statistical analyses were developed to distinguish the spleen from its surroundings and then to detect abnormal spleens. Experimental results demonstrated that the imaging method could effectively distinguish spleens from other organs and intestines. Based on a total sample of 57 birds, the classification rates were 92% on a self-test set and 95% on an independent test set for the correct detection of normal and abnormal birds. The methodology indicated the feasibility of using automated machine vision systems in the future to inspect internal organs and check the wholesomeness of poultry carcasses.
Vision Guided Intelligent Robot Design And Experiments
NASA Astrophysics Data System (ADS)
Slutzky, G. D.; Hall, E. L.
1988-02-01
The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages, such as LISP and OPS5, have been utilized to achieve expert-systems approaches to solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.
Vita, Randi; Overton, James A; Mungall, Christopher J; Sette, Alessandro
2018-01-01
The Immune Epitope Database (IEDB), at www.iedb.org, has the mission to make published experimental data relating to the recognition of immune epitopes easily available to the scientific public. By presenting curated data in a searchable database, we have liberated it from the tables and figures of journal articles, making it more accessible and usable by immunologists. Recently, the principles of Findability, Accessibility, Interoperability and Reusability have been formulated as goals that data repositories should meet to enhance the usefulness of their data holdings. We here examine how the IEDB complies with these principles and identify broad areas of success, but also areas for improvement. We describe short-term improvements to the IEDB that are being implemented now, as well as a long-term vision of true 'machine-actionable interoperability', which we believe will require community agreement on standardization of knowledge representation that can be built on top of the shared use of ontologies. PMID:29688354
Machine vision based teleoperation aid
NASA Technical Reports Server (NTRS)
Hoff, William A.; Gatrell, Lance B.; Spofford, John R.
1991-01-01
When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive (requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator). A machine vision based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators was conducted and showed that task accuracies were significantly greater with the aid than without it.
Calibrators measurement system for headlamp tester of motor vehicle base on machine vision
NASA Astrophysics Data System (ADS)
Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe
2014-09-01
With the development of photoelectric detection technology, machine vision has found wider use in industry. This paper mainly introduces an auto-lamp tester calibrator measuring system, of which a CCD image sampling system is the core. It presents the measuring principle for the optical axial angle and light intensity, and demonstrates the linear relationship between the calibrator's facula illumination and the image-plane illumination. The paper provides an important specification of the CCD imaging system. Image processing in MATLAB extracts the flare's geometric midpoint and average gray level. By fitting the statistics with the method of least squares, a regression equation relating illumination and gray level is obtained. The error of the experimental results of the measurement system is analyzed, and the combined standard uncertainty and its sources for the optical axial angle are given. The average measuring accuracy of the optical axial angle is controlled within 40''. The whole testing process uses digital means instead of manual judgment, giving higher accuracy and better repeatability than comparable measuring systems.
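The calibration step reduces to an ordinary least-squares line relating gray level to illuminance; a minimal sketch (the sample values in the comment are hypothetical):

```python
import numpy as np

def calibrate(gray_levels, illuminances):
    """Least-squares regression relating mean facula gray level to
    image-plane illuminance, mirroring the paper's calibration step.
    Returns a function mapping a measured gray level to illuminance."""
    slope, intercept = np.polyfit(gray_levels, illuminances, deg=1)
    return lambda g: slope * g + intercept

# Hypothetical usage with made-up calibration points:
# lux_of = calibrate([40, 90, 160], [120.0, 310.0, 560.0])
# print(lux_of(120))
```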
Some distinguishing characteristics of contour and texture phenomena in images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1992-01-01
The development of generalized contour/texture discrimination techniques is a central element necessary for machine vision recognition and interpretation of arbitrary images. Here, the visual perception of texture, selected studies of texture analysis in machine vision, and diverse small samples of contour and texture are all used to provide insights into the fundamental characteristics of contour and texture. From these, an experimental discrimination scheme is developed and tested on a battery of natural images. The visual perception of texture defined fine texture as a subclass which is interpreted as shading and is distinct from coarse figural similarity textures. Also, perception defined the smallest scale for contour/texture discrimination as eight to nine visual acuity units. Three contour/texture discrimination parameters were found to be moderately successful for this scale discrimination: (1) lightness change in a blurred version of the image, (2) change in lightness change in the original image, and (3) percent change in edge counts relative to local maximum.
2004-02-20
KENNEDY SPACE CENTER, FLA. - Center Director Jim Kennedy talks to students in Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla. Kennedy made the trip with NASA astronaut Kay Hire to share the agency’s new vision for space exploration with the next generation of explorers. Kennedy is talking with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.
2004-02-20
KENNEDY SPACE CENTER, FLA. - Astronaut Kay Hire talks to students in Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla. She joined Center Director Jim Kennedy in sharing the agency’s new vision for space exploration with the next generation of explorers. Kennedy is talking with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.
2004-02-20
KENNEDY SPACE CENTER, FLA. - Students at Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla., listen attentively to astronaut Kay Hire. She and Center Director Jim Kennedy were at the school to share the agency’s new vision for space exploration with the next generation of explorers. Kennedy is talking with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.
Hsu, Ao-Lin; Feng, Zhaoyang; Hsieh, Meng-Yin; Xu, X. Z. Shawn
2009-01-01
One challenge in aging research concerns identifying physiological parameters or biomarkers that can reflect the physical health of an animal and predict its lifespan. In C. elegans, a model organism widely used in aging research, motor deficits develop in old worms. Here we employed machine vision to quantify worm locomotion behavior throughout lifespan. We confirm that aging worms undergo a progressive decline in motor activity, beginning in early life. Importantly, the rate of motor activity decline rather than the absolute motor activity in the early-to-mid life of individual worms in an isogenic population inversely correlates with their lifespan, and thus may serve as a lifespan predictor. Long-lived mutant strains with deficits in insulin/IGF-1 signaling or food intake display a reduction in the rate of motor activity decline, suggesting that this parameter might also be used for across-strain comparison of healthspan. Our work identifies an endogenous physiological parameter for lifespan prediction and healthspan comparison. PMID:18255194
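The lifespan predictor described here amounts to a per-animal regression slope; a minimal sketch assuming activity is sampled at fixed ages (`activity_by_worm` and `days` are hypothetical inputs):

```python
import numpy as np

def decline_rates(activity_by_worm, days):
    """Per-worm rate of motor-activity decline over early-to-mid life,
    estimated as the slope of a least-squares line fit to activity
    versus age. The paper reports that this slope, not the absolute
    activity level, inversely correlates with lifespan."""
    return np.array([np.polyfit(days, act, deg=1)[0]
                     for act in activity_by_worm])

# Hypothetical usage: correlate decline rate with observed lifespans.
# rates = decline_rates(activity, days)
# r = np.corrcoef(rates, lifespans)[0, 1]   # expect a negative r
```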
Activity Recognition in Egocentric video using SVM, kNN and Combined SVMkNN Classifiers
NASA Astrophysics Data System (ADS)
Sanal Kumar, K. P.; Bhavani, R., Dr.
2017-08-01
Egocentric vision is a unique perspective in computer vision which is human centric. The recognition of egocentric actions is a challenging task which helps in assisting elderly people, disabled patients and so on. In this work, life-logging activity videos are taken as input and are organized into two categories: a top level and a second level. The recognition is done using features such as Histogram of Oriented Gradients (HOG), Motion Boundary Histogram (MBH) and Trajectory. The features are fused together to act as a single feature. The extracted features are reduced using Principal Component Analysis (PCA). The reduced features are provided as input to classifiers: Support Vector Machine (SVM), k-Nearest Neighbor (kNN), and a combined SVM and kNN (SVMkNN). These classifiers were evaluated, and the combined SVMkNN provided better results than other classifiers in the literature.
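A sketch of the PCA-plus-classifier stage, with scikit-learn standing in for the paper's implementation. The fusion rule, trusting the SVM when its decision value is confident and falling back to kNN otherwise, is one plausible reading of "combined SVMkNN"; the paper does not spell out its exact combination, and all parameters here are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def train_combined(X, y, n_components=50):
    """Reduce fused HOG/MBH/Trajectory features with PCA, then train
    both an SVM and a kNN classifier on the reduced features."""
    pca = PCA(n_components=n_components).fit(X)
    Z = pca.transform(X)
    svm = SVC(kernel="rbf").fit(Z, y)
    knn = KNeighborsClassifier(n_neighbors=5).fit(Z, y)
    return pca, svm, knn

def predict_combined(pca, svm, knn, X, margin=0.5):
    """Use the SVM where its decision value is confident; fall back to
    kNN near the boundary. Assumes integer class labels."""
    Z = pca.transform(X)
    conf = np.abs(svm.decision_function(Z))
    if conf.ndim > 1:               # multi-class: take the largest margin
        conf = conf.max(axis=1)
    out = svm.predict(Z)
    unsure = conf < margin
    if unsure.any():
        out[unsure] = knn.predict(Z[unsure])
    return out
```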
Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David
2018-06-01
The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirements. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrument color measurement and crude fat percentage were used as control measurements. Image features (18 color features; 1 marbling feature; 88 texture features) were extracted from whole pork loin color images. An artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that the CVS with support vector machine modeling reached prediction accuracies of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.
Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision
NASA Astrophysics Data System (ADS)
Hendrawan, Y.; Hawa, L. C.; Damayanti, R.
2018-03-01
This study applied a machine vision-based chip-drying monitoring system able to optimize the drying process of cassava chips. The objective of this study is to propose fish swarm intelligent (FSI) optimization algorithms to find the most significant set of image features suitable for predicting the water content of cassava chips during the drying process using an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximizes the prediction accuracy of the ANN. Multi-Objective Optimization (MOO) was used in this study, consisting of prediction accuracy maximization and feature-subset size minimization. The results showed that the best feature subset comprised grey mean, L(Lab) mean, a(Lab) energy, red entropy, hue contrast, and grey homogeneity. The best feature subset was tested successfully in the ANN model to describe the relationship between image features and the water content of cassava chips during the drying process, with an R2 between real and predicted data of 0.9.
Using human brain activity to guide machine learning.
Fong, Ruth C; Scheirer, Walter J; Cox, David D
2018-03-29
Machine learning is a field of computer science that builds algorithms that learn. In many cases, machine learning algorithms are used to recreate a human ability like adding a caption to a photo, driving a car, or playing a game. While the human brain has long served as a source of inspiration for machine learning, little effort has been made to directly use data collected from working brains as a guide for machine learning algorithms. Here we demonstrate a new paradigm of "neurally-weighted" machine learning, which takes fMRI measurements of human brain activity from subjects viewing images, and infuses these data into the training process of an object recognition learning algorithm to make it more consistent with the human brain. After training, these neurally-weighted classifiers are able to classify images without requiring any additional neural data. We show that our neural-weighting approach can lead to large performance gains when used with traditional machine vision features, as well as to significant improvements with already high-performing convolutional neural network features. The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.
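As a sketch of the neural-weighting idea, the snippet below biases an ordinary classifier toward images the brain responds to strongly by passing per-sample weights into training. Deriving the weights from fMRI responses is an assumption standing in for the paper's procedure, and `neural_weights` is a hypothetical precomputed input.

```python
from sklearn.linear_model import LogisticRegression

def train_neurally_weighted(features, labels, neural_weights):
    """'Neurally-weighted' training sketch: per-image weights derived
    from fMRI responses (assumed precomputed here) bias an ordinary
    classifier toward examples the brain represents strongly. After
    training, the classifier needs no further neural data."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, labels, sample_weight=neural_weights)
    return clf
```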
NASA Astrophysics Data System (ADS)
He, Ye; Chen, Xiaoan; Liu, Zhi; Qin, Yi
2018-06-01
The motorized spindle is the core component of CNC machine tools, and its vibration reduces the machining precision and service life of the machine tools. Owing to its fast response, large output force, and large displacement, the piezoelectric stack is often used as the actuator in active vibration control of the spindle. A piezoelectric self-sensing actuator (SSA) can reduce the cost of an active vibration control system and simplify its structure by eliminating a separate sensor, because an SSA can perform both actuating and sensing functions at the same time. The signal separation method for an SSA based on a bridge circuit is widely applied because of its simple principle and easy implementation. However, it is difficult to maintain the dynamic balance of the circuit. Prior research has used adaptive algorithms to balance the bridge circuit on a flexible beam dynamically, but those algorithms assume no correlation between the sensing and control voltages, which limits the applications of SSAs in the vibration control of rotor-bearing systems. Here, the electromechanical coupling model of the piezoelectric stack is established, followed by the dynamic model of the spindle system. Next, a new adaptive signal separation method based on the bridge circuit is proposed, which can adaptively separate a relatively small sensing voltage from the correlated mixed voltage. The experimental results show that when the self-sensing signal obtained from the proposed method is used as a displacement signal, the vibration of the motorized spindle can be suppressed effectively through a linear quadratic Gaussian (LQG) algorithm.
Chen, Xiaomei; Longstaff, Andrew; Fletcher, Simon; Myers, Alan
2014-04-01
This paper presents and evaluates an active dual-sensor autofocusing system that combines an optical vision sensor and a tactile probe for autofocusing on arrays of small holes on freeform surfaces. The system has been tested on a two-axis test rig and then integrated onto a three-axis computer numerical control (CNC) milling machine, where the aim is to rapidly and controllably measure the hole position errors while the part is still on the machine. The principle of operation is for the tactile probe to locate the nominal positions of holes, and the optical vision sensor follows to focus and capture the images of the holes. The images are then processed to provide hole position measurement. In this paper, the autofocusing deviations are analyzed. First, the deviations caused by the geometric errors of the axes on which the dual-sensor unit is deployed are estimated to be 11 μm when deployed on a test rig and 7 μm on the CNC machine tool. Subsequently, the autofocusing deviations caused by the interaction of the tactile probe, surface, and small hole are mathematically analyzed and evaluated. The deviations are a result of the tactile probe radius, the curvatures at the positions where small holes are drilled on the freeform surface, and the effect of the position error of the hole on focusing. An example case study is provided for the measurement of a pattern of small holes on an elliptical cylinder on the two machines. The absolute sum of the autofocusing deviations is 118 μm on the test rig and 144 μm on the machine tool. This is much less than the 500 μm depth of field of the optical microscope. Therefore, the method is capable of capturing a group of clear images of the small holes on this workpiece for either implementation.
Remote sensing in the coming decade: the vision and the reality
NASA Astrophysics Data System (ADS)
Gail, William B.
2006-08-01
Investment in understanding the Earth pays off twice. It enables pursuit of scientific questions that rank among the most interesting and profound of our time. It also serves society's practical need for increased prosperity and security. Over the last half-century, we have built a sophisticated network of satellites, aircraft, and ground-based remote sensing systems to provide the raw information from which we derive Earth knowledge. This network has served us well in the development of science and the provision of operational services. In the next decade, the demand for such information will grow dramatically. New remote sensing capabilities will emerge. Rapid evolution of Internet geospatial and location-based services will make communication and sharing of Earth knowledge much easier. Governments, businesses, and consumers will all benefit. But this exciting future is threatened from many directions. Risks range from technology and market uncertainties in the private sector to budget cuts and project setbacks in the public sector. The coming decade will see a dramatic confrontation between the vision of what needs to be accomplished in Earth remote sensing and the reality of our resources and commitment. The outcome will have long-term implications for both the remote sensing community and society as a whole.
Whatever Happened to Joint Vision 2010
2010-04-02
submitted to the Faculty of the Joint Advanced Warfighting School in partial satisfaction of the requirements of a Master of Science Degree in Joint...Service efforts and evolve “jointness” beyond the dictates of the 1986 Goldwater-Nichols Act. JV2010 delineated a common set of environmental ...towards something that is fresh, new and important. In this sense, the term vision may have been the wrong term. Warren Bennis and Burt Nanus, noted
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. (From left) Dr. Julian Earls, director of NASA Glenn Research Center, astronaut Leland Melvin, Sara Thompson, team lead, and KSC Deputy Director Dr. Woodrow Whitlow Jr. pose for a photo at Ronald E. McNair High School in Atlanta, a NASA Explorer School, after a presentation. Dr. Whitlow visited the school to share the vision for space exploration with the next generation of explorers. Whitlow talked with students about our destiny as explorers, NASA's stepping-stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space. Dr. Earls discussed the future and the vision for space, plus the NASA careers needed to meet the vision. Melvin talked about the importance of teamwork and what it takes for mission success.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Dr. Julian Earls, director of the NASA Glenn Research Center, talks to students at Ronald E. McNair High School in Atlanta, a NASA Explorer School. He accompanied KSC Deputy Director Dr. Woodrow Whitlow Jr., who visited the school to share the vision for space exploration with the next generation of explorers. Dr. Earls discussed the future and the vision for space, plus the NASA careers needed to meet the vision. Astronaut Leland Melvin (far right) accompanied Whitlow, talking with students about the importance of teamwork and what it takes for mission success. Whitlow talked with students about our destiny as explorers, NASA's stepping-stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.
Fuzzy Petri nets to model vision system decisions within a flexible manufacturing system
NASA Astrophysics Data System (ADS)
Hanna, Moheb M.; Buck, A. A.; Smith, R.
1994-10-01
The paper presents a Petri net approach to modelling, monitoring and control of the behavior of an FMS cell. The FMS cell described comprises a pick and place robot, vision system, CNC-milling machine and 3 conveyors. The work illustrates how the block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on Fuzzy Petri nets (Fuzzy logic with Petri nets) including an artificial neural network (Fuzzy Neural Petri nets) to model and control vision system decisions and robot sequences within an FMS cell. This methodology can be used as a graphical modelling tool to monitor and control the imprecise, vague and uncertain situations, and determine the quality of the output product of an FMS cell.
NASA Astrophysics Data System (ADS)
Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo
An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 held in a fixed position, with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.
LED lighting for use in multispectral and hyperspectral imaging
USDA-ARS?s Scientific Manuscript database
Lighting for machine vision and hyperspectral imaging is an important component for collecting high quality imagery. However, it is often given minimal consideration in the overall design of an imaging system. Tungsten-halogen lamps are the most common source of illumination for broad spectrum appl...
Evaluation and implementation of a machine vision system to categorize extraneous matter in cotton
USDA-ARS?s Scientific Manuscript database
The Cotton Trash Identification System (CTIS) developed at the Southwestern Cotton Ginning Research Laboratory was evaluated for identification and categorization of extraneous matter (EM) in cotton. The system’s categorization of trash objects in cotton images was evaluated against Agricultural Mar...
USDA-ARS?s Scientific Manuscript database
Currently, blueberries are inspected and sorted by color, size and/or firmness (or softness) in packinghouses, using different inspection techniques like machine vision and mechanical vibration or impact. A new inspection technique is needed for effectively assessing both external features and inter...
Non-contact capacitance based image sensing method and system
Novak, J.L.; Wiczer, J.J.
1994-01-25
A system and a method for imaging desired surfaces of a workpiece is described. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.
Non-contact capacitance based image sensing method and system
Novak, J.L.; Wiczer, J.J.
1995-01-03
A system and a method is provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device that accounts for the limited capabilities of the human user as well as the typical characteristics of a dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
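A minimal sketch of the column-time / row-frequency auditory mapping described above; the scan duration, frequency range, and sine-bank synthesis are illustrative assumptions rather than the VISOR parameters.

```python
import numpy as np

def image_to_audio(image, duration=1.0, fs=44100,
                   f_min=200.0, f_max=8000.0):
    """Scan a grayscale image (values in [0, 1]) left to right over
    `duration` seconds; each row drives a sine oscillator whose
    frequency grows with height and whose amplitude follows pixel
    brightness, distributing the image in time as a sound."""
    h, w = image.shape
    n = int(duration * fs)
    t = np.arange(n) / fs
    cols = np.minimum(np.arange(n) * w // n, w - 1)  # column per sample
    freqs = np.geomspace(f_max, f_min, h)            # top row = high pitch
    audio = np.zeros(n)
    for row in range(h):
        audio += image[row, cols] * np.sin(2 * np.pi * freqs[row] * t)
    return audio / (np.abs(audio).max() + 1e-9)      # normalize to [-1, 1]
```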
The role of vision for navigation in the crown-of-thorns seastar, Acanthaster planci
Sigl, Robert; Steibl, Sebastian; Laforsch, Christian
2016-01-01
Coral reefs all over the Indo-Pacific suffer from substantial damage caused by the crown-of-thorns seastar Acanthaster planci, a voracious predator that moves on and between reefs to seek out its coral prey. Chemoreception is thought to guide A. planci. As vision was recently introduced as another sense involved in seastar navigation, we investigated the potential role of vision for navigation in A. planci. We estimated the spatial resolution and visual field of the compound eye using histological sections and morphometric measurements. Field experiments in a semi-controlled environment revealed that vision in A. planci aids in finding reef structures at a distance of at least 5 m, whereas chemoreception seems to be effective only at very short distances. Hence, vision outweighs chemoreception at intermediate distances. A. planci might use vision to navigate between reef structures and to locate coral prey, therefore improving foraging efficiency, especially when multidirectional currents and omnipresent chemical cues on the reef hamper chemoreception. PMID:27476750
Switching Circuit for Shop Vacuum System
NASA Technical Reports Server (NTRS)
Burley, R. K.
1987-01-01
No internal connections to machine tools required. Switching circuit controls vacuum system that draws debris from grinders and sanders in machine shop. Circuit automatically turns on vacuum system whenever at least one sander or grinder is operating. Debris safely removed, even when operator neglects to turn on vacuum system manually. Pickup coils sense alternating magnetic fields just outside operating machines. Signal from any coil or combination of coils causes vacuum system to be turned on.
Time of Flight Estimation in the Presence of Outliers: A Biosonar-Inspired Machine Learning Approach
2013-08-29
Time of Flight Estimation in the Presence of Outliers: A biosonar-inspired machine learning approach. Nathan Intrator, Leon N Cooper, Brown University. Keywords: installations, biosonar, remote sensing, sonar resolution, sonar accuracy, sonar energy consumption. Abstract: When the Signal-to-Noise Ratio (SNR) falls below a certain...
The Impacts of Industrial Robots
1981-11-01
... strain gauges are used to measure very small forces at a number of points on the robot's end effector. Except for the simplest on-off devices, tactile sensors are not yet found on commercially available robots. Forces are sensed by using strain gauges or piezoelectric sensors. ... tools: i. deburring, drilling, grinding, milling, and routing machines; ii. plastic materials forming and injection machines; iii. metal die casting machines; iv. ...
Biosleeve Human-Machine Interface
NASA Technical Reports Server (NTRS)
Assad, Christopher (Inventor)
2016-01-01
Systems and methods for sensing human muscle action and gestures in order to control machines or robotic devices are disclosed. One exemplary system employs a tight fitting sleeve worn on a user arm and including a plurality of electromyography (EMG) sensors and at least one inertial measurement unit (IMU). Power, signal processing, and communications electronics may be built into the sleeve and control data may be transmitted wirelessly to the controlled machine or robotic device.
The World Hypotheses: Implications for Intercultural Communication Research.
ERIC Educational Resources Information Center
Ting-Toomey, Stella
The "sense making" process structures humans' categorization, perceptual, and expressive processes. These "sense making" references are ultimately derived from four distinct "root metaphors": mechanism or mechanistic thinking (machine), formism or formistic thinking (similarity), organicism or organistic thinking (organic process), and…
Sensory Interactive Teleoperator Robotic Grasping
NASA Technical Reports Server (NTRS)
Alark, Keli; Lumia, Ron
1997-01-01
As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction is performed on the object in the image, information about the object's location, orientation and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.
Sissons, B; Gray, W A; Bater, A; Morrey, D
2007-03-01
The vision of evidence-based medicine is that of experienced clinicians systematically using the best research evidence to meet the individual patient's needs. This vision remains distant from clinical reality, as no complete methodology exists to apply objective, population-based research evidence to the needs of an individual real-world patient. We describe an approach, based on techniques from machine learning, to bridge this gap between evidence and individual patients in oncology. We examine existing proposals for tackling this gap and the relative benefits and challenges of our proposed k-nearest-neighbour-based approach.
Adaptive Feedback in Local Coordinates for Real-time Vision-Based Motion Control Over Long Distances
NASA Astrophysics Data System (ADS)
Aref, M. M.; Astola, P.; Vihonen, J.; Tabus, I.; Ghabcheloo, R.; Mattila, J.
2018-03-01
We studied the differences in noise effects, the depth-correlated behavior of sensors, and errors caused by mapping between coordinate systems in robotic applications of machine vision. In particular, the highly range-dependent noise densities for semi-unknown object detection were considered. An equation is proposed to adapt estimation rules to the dramatic changes of noise over longer distances. The algorithm also benefits from the smooth feedback of the wheels to overcome variable latencies of the visual perception feedback. Experimental evaluation of the integrated system is presented with and without the algorithm to highlight its effectiveness.
Survey on diagnosis of diseases from retinal images
NASA Astrophysics Data System (ADS)
Das, Sneha; Malathy, C.
2018-04-01
The retina is a thin membranous layer of tissue at the back of the eye that provides the central vision needed for daily routines. Identifying retinal diseases at an early stage is a challenging task, since a healthy retina is required for central vision. Several retinal diseases affect the eye, such as retinal tear, retinal detachment, glaucoma, macular hole and macular degeneration. These maladies become more prevalent as a person's age increases. This survey reviews the diagnosis of retinal diseases from retinal images using machine learning techniques.
Navarro, Pedro J.; Fernández, Carlos; Borraz, Raúl; Alonso, Diego
2016-01-01
This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians, by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms have been trained with 1931 samples. The final performance of the method, measured in a real traffic scene containing 16 pedestrians and 469 non-pedestrian samples, shows sensitivity (81.2%), accuracy (96.2%) and specificity (96.8%). PMID:28025565
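The cube-and-projection front end can be sketched compactly; the cube size, grid resolution, and binary rasterization below are assumptions standing in for the paper's exact preprocessing.

```python
import numpy as np

def cube_projections(points, center, size=2.0, res=32):
    """Select LIDAR points inside a cube around `center` and rasterize
    their XY, XZ and YZ projections into small binary images, the kind
    of feature maps the pedestrian classifiers are trained on."""
    half = size / 2.0
    p = points - center                       # points: (N, 3) array
    inside = np.all(np.abs(p) <= half, axis=1)
    p = (p[inside] + half) / size             # normalize to [0, 1]
    idx = np.clip((p * res).astype(int), 0, res - 1)
    planes = []
    for i, j in ((0, 1), (0, 2), (1, 2)):     # XY, XZ, YZ axis pairs
        img = np.zeros((res, res), dtype=np.uint8)
        img[idx[:, i], idx[:, j]] = 1
        planes.append(img)
    return planes
```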
A Master Trainer Class for Professionals in Teaching the UltraCane Electronic Travel Device
ERIC Educational Resources Information Center
Penrod, William; Corbett, Michael D.; Blasch, Bruce
2005-01-01
Electronic travel devices are used to transform information about the environment that would normally be perceived through the visual sense into a form that can be perceived by people who are blind or have low vision through another sense (Blasch, Long, & Griffin-Shirley, 1989). They are divided into two broad categories: primary devices and…
NASA Astrophysics Data System (ADS)
Lee, Hyunki; Kim, Min Young; Moon, Jeon Il
2017-12-01
Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation called the correspondence problem, or 2π-ambiguity problem. Although a sensing method that combines well-known stereo vision and the phase measuring profilometry (PMP) technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses information from two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters related to measurement performance and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
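For intuition, classic scanline dynamic programming for stereo correspondence looks like the sketch below; the occlusion penalty and the generic cost matrix (which could mix phase and intensity differences, as the method above fuses both) are illustrative assumptions, not the paper's cost function.

```python
import numpy as np

def dp_scanline_disparity(costs, occ=1.0):
    """Textbook scanline DP stereo (illustrative, not optimized).
    costs[i, j] is the dissimilarity between left pixel i and right
    pixel j on one epipolar line. Returns a disparity per left pixel
    (0 where the pixel is marked occluded)."""
    n, m = costs.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = occ * np.arange(m + 1)          # leading right occlusions
    D[:, 0] = occ * np.arange(n + 1)          # leading left occlusions
    back = np.zeros((n + 1, m + 1), dtype=np.uint8)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            choices = (D[i - 1, j - 1] + costs[i - 1, j - 1],  # match
                       D[i - 1, j] + occ,                      # left occluded
                       D[i, j - 1] + occ)                      # right occluded
            back[i, j] = int(np.argmin(choices))
            D[i, j] = choices[back[i, j]]
    disp = np.zeros(n)
    i, j = n, m
    while i > 0 and j > 0:                    # backtrace the optimal path
        if back[i, j] == 0:
            disp[i - 1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif back[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return disp
```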
Online dating and conjugal bereavement.
Young, Goldthwaite Dannagal; Caplan, Scott E
2010-08-01
This study examined self-presentation in the online dating profiles of 241 widowed and 280 divorced individuals between 18 and 40 years old. A content analysis of open-ended user-generated profiles assessed the presence or absence of various themes, including the user's marital status, the backstory of their lost relationship, and whether they engaged in sense-making regarding that lost relationship. Results indicated that about one-third of widowed individuals discussed their loss in their profiles. In addition, about one-third of the widowed profiles included explicit reference to a philosophy of life, and about 16% mentioned sense-making or cognitive reappraisals of their bereavement. Many profiles included some articulation of a vision of a future partnership. Results also revealed a significant correlation between widowed individuals including a backstory and their likelihood of exhibiting sense-making in their profiles. Finally, unlike the widowed users, divorcees provided much briefer mentions of their lost relationships, used less sense-making language, and were less likely to articulate an explicit vision of future partnerships. Overall, the results suggest that for widowed individuals, online dating sites may function as venues to explore their past experiences and engage in the construction of a post-loss identity or a post-loss "ideal self".
Micro Fluidic Channel Machining on Fused Silica Glass Using Powder Blasting
Jang, Ho-Su; Cho, Myeong-Woo; Park, Dong-Sam
2008-01-01
In this study, micro fluidic channels are machined on fused silica glass via powder blasting, a mechanical etching process, and the machining characteristics of the channels are experimentally evaluated. In the process, material is removed by the collision of micro abrasives, injected by highly compressed air, onto the target surface. This approach can be characterized as brittle-mode machining based on micro-crack propagation. Fused silica glass, a high-purity synthetic amorphous silicon dioxide, is selected as the workpiece material. It has a very low thermal expansion coefficient, excellent optical qualities, and exceptional transmittance over a wide spectral range, especially in the ultraviolet range. The powder blasting process parameters affecting the machined results are injection pressure, abrasive particle size and density, stand-off distance, number of nozzle scans, and shape/size of the required patterns. In this study, the influence of the number of nozzle scans, abrasive particle size, and pattern size on the formation of micro channels is investigated. Machined shapes and surface roughness are measured using a 3-dimensional vision profiler and the results are discussed. PMID:27879730
The Influence of the Stylus Size of the Touch Probe on the Roundness Deviation
NASA Astrophysics Data System (ADS)
Mizera, Ondrej; Cepova, Lenka
2017-12-01
The article deals with the influence of the touch probe on the measured roundness deviation, using the WENZEL LH 65 X3M 3D coordinate measuring machine and the Metrosoft QUARTIZ R6 software at the VŠB-TU Ostrava laboratory of the Faculty of Mechanical Engineering, Department of Machining, Assembly and Engineering Metrology. The aim was to analyze the influence of individual styli, with different pin lengths and sensing-tip diameters, on the accuracy of the roundness deviation measurement.
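For orientation, a generic least-squares-circle (LSC) evaluation of roundness deviation from probed points looks like the following; this is a textbook method under assumed inputs, not the Metrosoft QUARTIZ procedure.

```python
import numpy as np

def roundness_lsc(points):
    """points: (n, 2) probed XY coordinates on a nominally circular section.
    Fit a least-squares circle, return the radial peak-to-valley spread."""
    x, y = np.asarray(points, dtype=float).T
    # Linear least squares for x^2 + y^2 = 2a*x + 2b*y + c, center at (a, b).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    r = np.hypot(x - a, y - b)          # radial distance of each probed point
    return r.max() - r.min()            # roundness deviation (peak-to-valley)

theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
pts = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta)])
pts += np.random.normal(0, 0.01, pts.shape)   # toy probing noise, mm
print(f"roundness deviation: {roundness_lsc(pts):.4f} mm")
```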
Vision sensor and dual MEMS gyroscope integrated system for attitude determination on moving base
NASA Astrophysics Data System (ADS)
Guo, Xiaoting; Sun, Changku; Wang, Peng; Huang, Lu
2018-01-01
To determine, with a MEMS (Micro-Electro-Mechanical Systems) gyroscope, the relative attitude between an object on a moving base and the base reference system, the motion of the base is superfluous information that must be removed from the gyroscope output. Our strategy is to add an auxiliary gyroscope attached to the reference system: the master gyroscope senses the total motion, and the auxiliary gyroscope senses the motion of the moving base. By a generalized difference method, the relative attitude in a non-inertial frame can be determined from the dual gyroscopes. With the vision sensor suppressing the accumulative drift of the MEMS gyroscope, a vision and dual-MEMS-gyroscope integrated system is formed. Coordinate system definitions and spatial transforms are executed in order to fuse inertial and visual data from different coordinate systems. A nonlinear filter algorithm, the Cubature Kalman filter, is used to fuse slow visual data and fast inertial data. A practical experimental setup was built and used to validate the feasibility and effectiveness of the proposed attitude determination system in the non-inertial frame on the moving base.
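The differencing step can be illustrated with a minimal quaternion update; the frame alignment, rotation matrix, and rates below are assumptions, and the vision/Cubature-Kalman-filter fusion stage is omitted.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def step_relative_attitude(q_rel, omega_master, omega_aux, R_base_to_obj, dt):
    """One update: rotate the base rate (auxiliary gyro) into the object frame,
    subtract it from the master gyro rate, and integrate the relative quaternion."""
    omega_rel = omega_master - R_base_to_obj @ omega_aux
    dq = np.concatenate(([1.0], 0.5 * dt * omega_rel))   # small-angle increment
    q_rel = quat_mul(q_rel, dq)
    return q_rel / np.linalg.norm(q_rel)

q = np.array([1.0, 0.0, 0.0, 0.0])
q = step_relative_attitude(q, np.array([0.0, 0.0, 0.20]),   # master: total motion
                           np.array([0.0, 0.0, 0.05]),      # auxiliary: base motion
                           np.eye(3), dt=0.01)              # frames assumed aligned
print(q)
```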
Nonlinear programming for classification problems in machine learning
NASA Astrophysics Data System (ADS)
Astorino, Annabella; Fuduli, Antonio; Gaudioso, Manlio
2016-10-01
We survey some nonlinear models for classification problems arising in machine learning. In recent years this field has become more and more relevant due to many practical applications, such as text and web classification, object recognition in machine vision, gene expression profile analysis, DNA and protein analysis, medical diagnosis, customer profiling, etc. Classification deals with the separation of sets by means of appropriate separation surfaces, generally obtained by solving a numerical optimization model. While linear separability is the basis of the most popular approach to classification, the Support Vector Machine (SVM), in recent years the use of nonlinear separating surfaces has received some attention. The objective of this work is to recall some of these proposals, mainly in terms of their numerical optimization models. In particular we tackle the polyhedral, ellipsoidal, spherical and conical separation approaches and, for some of them, we also consider the semisupervised versions.
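A compact illustration of linear versus nonlinear separating surfaces, using scikit-learn's SVC on data that is not linearly separable; the dataset and kernels are illustrative and unrelated to the surveyed optimization models.

```python
# Concentric-circle data has no linear separating surface, so the linear SVM
# scores near chance while the nonlinear (RBF-kernel) surface separates well.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, noise=0.08, factor=0.4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    print(kernel, "accuracy:", round(clf.score(X_te, y_te), 3))
```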
NASA Astrophysics Data System (ADS)
Rückwardt, M.; Göpfert, A.; Schnellhorn, M.; Correns, M.; Rosenberger, M.; Linß, G.
2010-07-01
Precise measuring of spectacle frames is an important field of quality assurance for opticians and their customers. Different suppliers and a number of measuring methods are available, but all of them are tactile. In this paper the possible employment of optical coordinate measuring machines for detecting the groove of a spectacle frame is discussed. The ambient conditions, such as deviation and measuring time, are as multifaceted as the quality characteristics and the measuring objects themselves, and have to be tested. The main challenge for an optical coordinate measuring machine, however, is the blocked optical path, because the device under test is located behind an undercut. In this case it is necessary to deflect the beam of the machine, for example with a rotating plane mirror. In the next step the difficulties of applying machine vision to the spectacle frame are explained. Finally, first results are given.
Development of techniques to enhance man/machine communication
NASA Technical Reports Server (NTRS)
Targ, R.; Cole, P.; Puthoff, H.
1974-01-01
A four-state random stimulus generator, considered to function as an ESP teaching machine, was used to investigate an approach to facilitating interactions between man and machines. A subject tries to guess which of the four states the machine is in, and the machine offers the user feedback and reinforcement as to the correctness of his choice. Using this machine, 148 volunteer subjects were screened under various protocols. Several whose learning slope and/or mean score departed significantly from chance expectation were identified. Direct physiological evidence of perception of remote stimuli not presented to any known sense of the percipient was also studied, using electroencephalographic (EEG) output recorded while a light was flashed in a distant room.
ERIC Educational Resources Information Center
National Academy of Sciences - National Research Council, Washington, DC. Committee on Prosthetics Research and Development.
The problems of providing sensory aids for the blind are presented and a report on the present status of aids discusses direct translation and recognition reading machines as well as mobility aids. Aspects of required research considered are the following: assessment of needs; vision, audition, taction, and multimodal communication; reading aids,…
Geometric Invariants and Object Recognition.
1992-08-01
University of Chicago Press. Maybank , S.J. [1992], "The Projection of Two Non-coplanar Conics", in Geometric Invariance in Machine Vision, eds. J.L...J.L. Mundy and A. Zisserman, MIT Press, Cambridge, MA. Mundy, J.L., Kapur, .. , Maybank , S.J., and Quan, L. [1992a] "Geometric Inter- pretation of
The Need for Alternative Paradigms in Science and Engineering Education
ERIC Educational Resources Information Center
Baggi, Dennis L.
2007-01-01
There are two main claims in this article. First, that the classic pillars of engineering education, namely, traditional mathematics and differential equations, are merely a particular, if not old-fashioned, representation of a broader mathematical vision, which spans from Turing machine programming and symbolic productions sets to sub-symbolic…
Artificial Intelligence and the High School Computer Curriculum.
ERIC Educational Resources Information Center
Dillon, Richard W.
1993-01-01
Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…
2001-09-01
Topics: diagnosis, natural language understanding, circuit fault diagnosis, pattern recognition, machine vision, financial auditing, map learning, sensor… ACCA, ACCB… A flight's degree of command and control (FCC) value is assumed to be the average of all the ACC values of the aircraft in the…
ICPR-2016 - International Conference on Pattern Recognition
Invited talks included "…Learning for Scene Understanding". ICPR2016 paper awards: Best Piero Zamperoni Student Paper, "…Paced Dictionary Learning for Cross-Domain Retrieval and Recognition" (Xu, Dan; Song, Jingkuan; Alameda…). The conference hosted discussions on recent advances in the fields of Pattern Recognition, Machine Learning and Computer Vision.
NASA Astrophysics Data System (ADS)
Hildreth, E. C.
1985-09-01
For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step: vision proceeds in stages, with each stage producing increasingly more useful descriptions of the image and then the scene. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
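One concrete instance of detecting intensity changes, in the spirit of this line of work, is the Laplacian-of-Gaussian zero-crossing detector sketched below; the input image, sigma, and the simple sign-change test are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def log_edges(image, sigma=2.0):
    """Edges as zero-crossings of the Laplacian of Gaussian of the image."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    # A zero-crossing: the sign of the LoG response changes between neighbors.
    zc = np.zeros_like(log, dtype=bool)
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    return zc

edges = log_edges(np.random.rand(64, 64))  # toy input; use a real image in practice
print(edges.sum(), "edge pixels")
```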
Square tracking sensor for autonomous helicopter hover stabilization
NASA Astrophysics Data System (ADS)
Oertel, Carl-Henrik
1995-06-01
Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a ground-fixed or moving target. As a proof of concept for a general synthetic vision solution, a restricted machine vision system capable of locating and tracking a special target was developed by the Institute of Flight Mechanics of Deutsche Forschungsanstalt für Luft- und Raumfahrt e.V. (i.e., the German Aerospace Research Establishment). This sensor, which is specialized to detect and track a square, was integrated in the fly-by-wire helicopter ATTHeS (i.e., Advanced Technology Testing Helicopter System). An existing model-following controller for the forward flight condition was adapted to the hover and low-speed requirements of the flight vehicle. The special target, a black square with a side length of one meter, was mounted on top of a car. Flight tests demonstrated the automatic stabilization of the helicopter above the moving car by synthetic vision.
Vision-based aircraft guidance
NASA Technical Reports Server (NTRS)
Menon, P. K.
1993-01-01
Early research on the development of machine vision algorithms to serve as pilot aids in aircraft flight operations is discussed. The research is useful for synthesizing new cockpit instrumentation that can enhance flight safety and efficiency. With the present work as the basis, future research will produce a low-cost instrument by integrating a conventional TV camera with off-the-shelf digitizing hardware for flight test verification. The initial focus of the research will be on developing pilot aids for clear-night operations. The latter part of the research will examine synthetic vision issues for poor-visibility flight operations. Both research efforts will contribute towards the high-speed civil transport aircraft program. It is anticipated that the research reported here will also produce pilot aids for conducting helicopter flight operations during emergency search and rescue. The primary emphasis of the present research effort is on near-term, flight-demonstrable technologies. This report discusses pilot aids for night landing and takeoff, and synthetic vision as an aid to low-visibility landing.
User requirements for project-oriented remote sensing
NASA Technical Reports Server (NTRS)
Hitchcock, H. C.; Baxter, F. P.; Cox, T. L.
1975-01-01
Registration of remotely sensed data to geodetic coordinates provides for overlay analysis of land use data. For aerial photographs of a large area, differences in scales, dates, and film types are reconciled, and multispectral scanner data are machine registered at the time of acquisition.
2013-01-01
…intelligently selecting waveform parameters using adaptive algorithms. The adaptive algorithms optimize the waveform parameters based on (1) the EM…the environment. Subject terms: cognitive radar, adaptive sensing, spectrum sensing, multi-objective optimization, genetic algorithms, machine… (Figures include a detection and classification block diagram and a genetic algorithm block diagram.)
Bradbury, Kyle; Saboo, Raghav; L. Johnson, Timothy; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; M. Collins, Leslie; G. Newell, Richard
2016-01-01
Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, for which there is limited public data on deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment. PMID:27922592
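One of the stated applications, estimating installed PV capacity, can be sketched from the dataset's border vertices alone; the polygon, the assumed power density, and the pixel-to-metre scaling below are illustrative, not values from the paper.

```python
import numpy as np

def polygon_area(vertices):
    """Shoelace formula for the area of a simple polygon given (x, y) vertices."""
    x, y = np.asarray(vertices, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Toy panel footprint in metres; assume roughly 150 W of capacity per m^2 of array.
panel = [(0, 0), (5, 0), (5, 2), (0, 2)]
print("estimated capacity:", polygon_area(panel) * 150, "W")
```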
Vargas, Mara Ambrosina de O; Meyer, Dagmar Estermann
2005-06-01
This study discusses the human being-machine relationship in the process called the "cyborgization" of the nurse who works in intensive care, based on post-structuralist Cultural Studies and highlighting Haraway's concept of the cyborg. In it, manuals used by nurses in Intensive Care Units have been examined as cultural texts. This cultural analysis tries to decode the various senses of "human" and "machine", with the aim of recognizing processes that turn nurses into cyborgs. The argument is that intensive care nurses fall into a process of "technology embodiment" that turns the body-professional into a hybrid, one that makes it possible to disqualify, at the same time, notions such as the machine and the body "proper", since it is the hybridization between one and the other that counts there. Like cyborgs, intensive care nurses learn to "be with" the machine, and this connection limits the specificity of their actions. It is suggested that processes of "cyborgization" such as this are useful for questioning - and dealing with in different ways - the senses of "human" and "humanity" that support a major part of knowledge/action in health.
NASA Astrophysics Data System (ADS)
Rhee, Jinyoung; Kim, Gayoung; Im, Jungho
2017-04-01
Three regions of Indonesia with different rainfall characteristics were chosen to develop drought forecast models based on machine learning. The 6-month Standardized Precipitation Index (SPI6) was selected as the target variable. The models' forecast skill was compared to the skill of long-range climate forecast models in terms of drought accuracy and regression mean absolute error (MAE). Indonesian droughts are known to be related to El Niño Southern Oscillation (ENSO) variability, despite regional differences, as well as to the monsoon, local sea surface temperature (SST), other large-scale atmosphere-ocean interactions such as the Indian Ocean Dipole (IOD) and the Southern Pacific Convergence Zone (SPCZ), and local factors including topography and elevation. The machine learning models are thus intended to enhance drought forecast skill by combining local and remote SST data and remote sensing information, reflecting initial drought conditions, with the long-range climate forecast model results. A total of 126 machine learning models were developed for the three regions of West Java (JB), West Sumatra (SB), and Gorontalo (GO); the six long-range climate forecast models MSC_CanCM3, MSC_CanCM4, NCEP, NASA, PNU, and POAMA plus one climatology model based on remote sensing precipitation data; and 1- to 6-month lead times. When the machine learning models were compared to the long-range climate forecast models, the West Java and Gorontalo regions showed similar characteristics in terms of drought accuracy: the drought accuracy of the long-range climate forecast models was generally higher than that of the machine learning models at short lead times, but the opposite held at longer lead times. For West Sumatra, however, the machine learning models and the long-range climate forecast models showed similar drought accuracy. The machine learning models showed smaller regression errors for all three regions, especially at longer lead times. Among the three regions, the machine learning models developed for Gorontalo showed the highest drought accuracy and the lowest regression error. West Java showed higher drought accuracy than West Sumatra, while West Sumatra showed lower regression error than West Java. The lower error in West Sumatra may be due to the smaller sample size used for training and evaluation in that region. Regional differences in forecast skill are determined by the effect of ENSO and the resulting forecast skill of the long-range climate forecast models. While somewhat high in West Sumatra, the relative importance of remote sensing variables was low in most cases. The high importance of the variables based on long-range climate forecast models indicates that the forecast skill of the machine learning models is mostly determined by the forecast skill of the climate models.
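For reference, the target variable SPI6 can be computed along the following lines; this is a simplified sketch (gamma fit with the location fixed at zero, no handling of zero-precipitation months) rather than the exact procedure used in the study.

```python
import numpy as np
from scipy import stats

def spi6(monthly_precip):
    """SPI6: fit a gamma distribution to 6-month rolling precipitation sums,
    then map the cumulative probabilities onto a standard normal."""
    p = np.asarray(monthly_precip, dtype=float)
    sums = np.convolve(p, np.ones(6), mode="valid")  # 6-month accumulations
    a, loc, scale = stats.gamma.fit(sums, floc=0)    # fit shape/scale, fix loc=0
    cdf = stats.gamma.cdf(sums, a, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))  # avoid infinities

print(spi6(np.random.gamma(2.0, 80.0, size=120))[:5])  # toy 10-year monthly series
```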
Nian, Rui; Liu, Fang; He, Bo
2013-01-01
Underwater vision is one of the dominant senses and has shown great prospects in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework has been established to explore and understand the functional roles of the higher-order statistical structures of the visual stimulus in an underwater artificial vision system. The model is inspired by characteristics of the early human vision system such as modality, redundancy reduction, sparseness, and independence, and its multiple-layer implementations seem to capture, respectively, Gabor-like basis functions, shape contours, and complicated textures. The simulation results have shown good performance in terms of the effectiveness and consistency of the proposed approach for underwater images collected by autonomous underwater vehicles (AUVs). PMID:23863855
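A single ICA layer of the kind such a framework builds on can be sketched with scikit-learn's FastICA on image patches; the stand-in image, patch size, and component count are assumptions, and the hierarchical layers are omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
image = rng.random((128, 128))                       # stand-in for an underwater image
patches = np.stack([image[r:r + 8, c:c + 8].ravel()  # 2000 random 8x8 patches
                    for r, c in rng.integers(0, 120, size=(2000, 2))])
patches -= patches.mean(axis=1, keepdims=True)       # remove each patch's DC offset

ica = FastICA(n_components=32, max_iter=500, random_state=0)
ica.fit(patches)
basis = ica.mixing_.T.reshape(-1, 8, 8)              # learned basis functions as 8x8 tiles
print(basis.shape)                                   # (32, 8, 8); Gabor-like on natural images
```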
Corrias, Anna
2018-01-01
In his treatise on dreams Somniorum Synesiorum Libri IIII, published in 1562, the Italian Renaissance philosopher and physician Girolamo Cardano distinguishes between idola and visiones (or visa). Historians have discussed the reasons for such a distinction without taking into account Cardano's original theory of sense-perception. In this article I shall argue that, in order to interpret the meaning of idola and visiones in Cardano's theory of dreams, one should bear in mind his view that hearing is superior to sight and that while idola are essentially based on sound, visiones depend on images.
32 CFR 518.20 - Collection of fees and fee rates.
Code of Federal Regulations, 2011 CFR
2011-07-01
... assessed as computer search. The terms “programmer/operator” shall not be limited to the traditional programmers or operators. Rather, the terms shall be interpreted in their broadest sense to incorporate any..., programmer, database administrator, or action officer). (ii) Machine time. Machine time involves only direct...
Modification of Upper Thread Tensioner of Sewing Machine
NASA Astrophysics Data System (ADS)
Klouček, P.; Škop, P.
The standard mechanical upper thread tensioner of sewing machines is more and more limited in use for industrial sewing machines due to increasing demands for quality and rising machine speeds. Setting aside the mostly manual force settings, made only by feel, the most problematic issues are the influence of the differing friction coefficients of different thread batches and the strong dependence of thread tension on sewing machine speed. The article describes development focused on eliminating the most significant disadvantages of the standard tensioner, and mainly on finding a new conception of the tensioner with an electromagnetic brake and on the development and testing of its prototype.
Computer vision for automatic inspection of agricultural produce
NASA Astrophysics Data System (ADS)
Molto, Enrique; Blasco, Jose; Benlloch, Jose V.
1999-01-01
Fruit and vegetables undergo different manipulations from the field to the final consumer. These are basically oriented towards cleaning and selecting the product into homogeneous categories. For this reason, several research projects aimed at fast, adequate produce sorting and quality control are currently under development around the world. Moreover, it is possible to find manual and semi-automatic commercial systems capable of reasonably performing these tasks. However, in many cases their accuracy is incompatible with current European market demands, which are constantly increasing. IVIA, the Valencian Research Institute of Agriculture, located in Spain, has been involved in several European projects related to machine vision for real-time inspection of various agricultural produce. This paper focuses on work related to two products that have different requirements: fruit and olives. In the case of fruit, the Institute has developed a vision system capable of providing an assessment of the external quality of single fruit to a robot that also receives information from other sensors. The system uses four different views of each fruit and has been tested on peaches, apples and citrus. Processing time for each image is under 500 ms using a conventional PC. The system provides information about primary and secondary color, blemishes and their extension, and stem presence and position, which allows further automatic orientation of the fruit in the final box using a robotic manipulator. Work carried out on olives was devoted to fast sorting of table olives. A prototype has been developed to demonstrate the feasibility of a machine vision system capable of automatically sorting 2500 kg/h of olives using low-cost conventional hardware.
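A minimal sketch of the kind of color-segmentation step such an inspection system uses: thresholding in HSV to flag blemish-colored pixels. The HSV ranges and the stand-in image are assumptions, not IVIA's calibrated values.

```python
import numpy as np
import cv2

image = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)  # stand-in fruit view
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

lower = np.array([10, 60, 20])     # hypothetical blemish hue/saturation/value range
upper = np.array([30, 255, 120])
mask = cv2.inRange(hsv, lower, upper)

blemish_ratio = mask.mean() / 255.0  # fraction of pixels flagged as blemish
print(f"blemish coverage: {blemish_ratio:.1%}")
```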
NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien
2012-09-01
This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system can calculate the characteristics of an object's color and then perform color segmentation. To avoid erroneous action judgments, the system uses a weight voting mechanism, which sets a condition score and weight value for each candidate action judgment and chooses the best judgment by weighted vote. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
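The weight voting mechanism can be sketched as a weighted plurality vote over candidate action judgments; the detector names, weights, and actions below are invented for illustration.

```python
from collections import defaultdict

def weighted_vote(votes, weights):
    """votes: {detector: action judgment}; weights: {detector: reliability weight}.
    Return the action whose accumulated weighted score is highest."""
    scores = defaultdict(float)
    for detector, action in votes.items():
        scores[action] += weights.get(detector, 1.0)
    return max(scores, key=scores.get)

votes = {"color_seg": "raise_arm", "contour": "raise_arm", "motion": "squat"}
weights = {"color_seg": 0.5, "contour": 0.8, "motion": 0.9}
print(weighted_vote(votes, weights))  # "raise_arm" wins: 0.5 + 0.8 > 0.9
```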
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Principal Albert Sye, astronaut Leland Melvin, Dr. Julian Earls and KSC Deputy Director Dr. Woodrow Whitlow Jr. share the stage at Ronald E. McNair High School in Atlanta, a NASA Explorer School. Dr. Earls is director of the NASA Glenn Research Center. He joined Dr. Whitlow on a visit to the school to share the vision for space exploration with the next generation of explorers. Whitlow talked with students about our destiny as explorers, NASA's stepping-stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space. Dr. Earls discussed the future and the vision for space, plus the NASA careers needed to meet the vision. Melvin talked about the importance of teamwork and what it takes for mission success.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. KSC Deputy Director Dr. Woodrow Whitlow Jr. talks to students at Ronald E. McNair High School in Atlanta, a NASA Explorer School. He visited the school to share the vision for space exploration with the next generation of explorers. Astronaut Leland Melvin (second from right) accompanied Whitlow, talking with students about the importance of teamwork and what it takes for mission success. Also on the visit was Dr. Julian Earls (far right), director of the NASA Glenn Research Center, who discussed the future and the vision for space, plus the NASA careers needed to meet the vision. Whitlow talked with students about our destiny as explorers, NASA's stepping-stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.
Are the strategic stars aligned for your corporate brand?
Hatch, M J; Schultz, M
2001-02-01
In recent years, companies have increasingly seen the benefits of creating a corporate brand. Rather than spend marketing dollars on branding individual products, giants like Disney and Microsoft promote a single umbrella image that casts one glow over all their products. A company must align three interdependent elements--call them strategic stars--to create a strong corporate brand: vision, culture, and image. Aligning the stars takes concentrated managerial skill and will, the authors say, because each element is driven by a different constituency: management, employees, or stakeholders. To effectively build a corporate brand, executives must identify where their strategic stars fall out of line. The authors offer a series of diagnostic questions designed to reveal misalignments in corporate vision, culture, and image. The first set of questions looks for gaps between vision and culture; for example, when management establishes a vision that is too ambitious for the organization to implement. The second set addresses culture and image, uncovering possible gaps between the attitudes of employees and the perceptions of the outside world. The last set of questions explores the vision-image gap--is management taking the company in a direction that its stakeholders support? The authors discuss the benefits of a corporate brand, such as reducing marketing costs and building a sense of community among customers. But they also point to cases in which a corporate brand doesn't make sense--for instance, if you are a product incubator, if you've recently experienced M&A activity, or if you are expecting fallout from risky ventures.
Gesture-Controlled Interfaces for Self-Service Machines
NASA Technical Reports Server (NTRS)
Cohen, Charles J.; Beach, Glenn
2006-01-01
Gesture-controlled interfaces are software-driven systems that facilitate device control by translating visual hand and body signals into commands. Such interfaces could be especially attractive for controlling self-service machines (SSMs): for example, public information kiosks, ticket dispensers, gasoline pumps, and automated teller machines (see figure). A gesture-controlled interface would include a vision subsystem comprising one or more charge-coupled-device video cameras (at least two would be needed to acquire three-dimensional images of gestures). The output of the vision system would be processed by a pure software gesture-recognition subsystem. Then a translator subsystem would convert a sequence of recognized gestures into commands for the SSM to be controlled; these could include, for example, a command to display requested information, change control settings, or actuate a ticket- or cash-dispensing mechanism. Depending on the design and operational requirements of the SSM to be controlled, the gesture-controlled interface could be designed to respond to specific static gestures, dynamic gestures, or both. Static and dynamic gestures can include stationary or moving hand signals, arm poses or motions, and/or whole-body postures or motions. Static gestures would be recognized on the basis of their shapes; dynamic gestures would be recognized on the basis of both their shapes and their motions. Because dynamic gestures include temporal as well as spatial content, this gesture-controlled interface can extract more information from dynamic gestures than from static ones.
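A toy sketch of the translator subsystem's role: mapping a recognized gesture sequence to an SSM command. The gesture vocabulary and command names are invented; the article does not specify them.

```python
# Hypothetical gesture-sequence-to-command table for an SSM.
GESTURE_TO_COMMAND = {
    ("point", "circle"): "display_requested_info",
    ("swipe_up",): "increase_setting",
    ("fist", "open_palm"): "dispense_ticket",
}

def translate(gesture_sequence):
    """Return the SSM command for the longest matching suffix of the sequence."""
    for n in range(len(gesture_sequence), 0, -1):
        cmd = GESTURE_TO_COMMAND.get(tuple(gesture_sequence[-n:]))
        if cmd:
            return cmd
    return None

print(translate(["idle", "fist", "open_palm"]))  # -> "dispense_ticket"
```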
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
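A short usage sketch of FLANN through its OpenCV integration mentioned in the abstract, indexing descriptors with the randomized k-d forest; the descriptor data and parameter values are illustrative.

```python
import numpy as np
import cv2

train = np.random.rand(1000, 128).astype(np.float32)  # stand-in SIFT-like descriptors
query = np.random.rand(5, 128).astype(np.float32)

index_params = dict(algorithm=1, trees=4)   # 1 = FLANN_INDEX_KDTREE (randomized k-d forest)
search_params = dict(checks=64)             # leaves to visit: more = slower, more accurate
flann = cv2.FlannBasedMatcher(index_params, search_params)

matches = flann.knnMatch(query, train, k=2)  # two approximate nearest neighbors per query
print(matches[0][0].trainIdx, matches[0][0].distance)
```

The checks parameter exposes the speed/accuracy trade-off the paper's automated configuration procedure tunes per dataset.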
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.
1987-03-01
This document reviews research accomplishments achieved by the staff of the Center for Engineering Systems Advanced Research (CESAR) during the fiscal years 1984 through 1987. The manuscript also describes future CESAR objectives for the 1988-1991 planning horizon, and beyond. As much as possible, the basic research goals are derived from perceived Department of Energy (DOE) needs for increased safety, productivity, and competitiveness in the United States energy producing and consuming facilities. Research areas covered include the HERMIES-II Robot, autonomous robot navigation, hypercube computers, machine vision, and manipulators.
[On machines and instruments (II): the world in the eye of the work of E. T. A. Hoffmann].
Montiel, L
2008-01-01
Continuing the subject of the previous work, this article considers the whole series of problems connected to the question of vision provoked by the mere existence of the body of the automaton. The eyes of the android and, above all, the reactions aroused by looking at these human-shaped machines are the object of Hoffmann's reflections, from a viewpoint apparently firmly set within the Goethean concept of the…
Your Child's Development: 2 Months
… new skills your baby may have this month: Communication and Language Skills: develops more distinct cries to …
The use of multimedia and programmed teaching machines for remote sensing education
NASA Technical Reports Server (NTRS)
Ulliman, J. J.
1980-01-01
The advantages, limitations, and uses of various audiovisual equipment and techniques used in universities for individualized and group instruction in the interpretation and classification of remotely sensed data are considered, as well as systems for programmed and computer-assisted instruction.