Sample records for machine vision technology

  1. Machine vision 1992-1996: technology program to promote research and its utilization in industry

    NASA Astrophysics Data System (ADS)

    Soini, Antti J.

    1994-10-01

    Machine vision technology has attracted strong interest in Finnish research organizations, resulting in many innovative products for industry. Despite this, end users remained very skeptical about machine vision and its robustness in harsh industrial environments. The Technology Development Centre TEKES, which funds technology-related research and development projects in universities and individual companies, therefore decided to start a national technology program, Machine Vision 1992 - 1996. Led by industry, the program boosts research in machine vision technology and seeks to put the research results to work in practical industrial applications. The emphasis is on nationally important, demanding applications. The program will create new industry and business for machine vision producers and encourage the process and manufacturing industries to take advantage of this new technology. So far 60 companies and all major universities and research centers are working on forty different projects. The key themes are process control, robot vision, and quality control.

  2. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  3. Color line-scan technology in industrial applications

    NASA Astrophysics Data System (ADS)

    Lemstrom, Guy F.

    1995-10-01

    Color machine vision opens new possibilities for industrial on-line quality control. With color machine vision it is possible to detect different colors and shades, perform color separation and spectroscopic analysis, and at the same time carry out the same geometrical measurements as with gray-scale technology, such as dimensions, shape, and texture. Combining these capabilities in a color line-scan camera takes machine vision into new applications and new areas of the machine vision business. Quality and process control requirements in industry grow more demanding every day, and color machine vision can solve many simple tasks that could not be realized with gray-scale technology; the inability to detect or measure color is one reason machine vision has not been used in quality control as much as it could have been. Enthusiasm for color machine vision in industrial applications is growing. Potential industries include food, wood, mining and minerals, printing, paper, glass, plastics, and recycling, with tasks ranging from simple measurement to total process and quality control. Color machine vision is not only for measuring colors: it can also be used for contrast enhancement, object detection, background removal, and structure detection and measurement. Color or spectral separation allows applications to be worked out in more ways than before; it is a question of exploiting two or more data values per measured pixel instead of the single value of traditional gray-scale technology. Plenty of potential applications can already be realized with color vision today, and it will bring more performance to many traditional gray-scale applications in the near future. Most importantly, color machine vision offers a new way of working out applications where machine vision has not been applied before.

  4. The Employment Effects of High-Technology: A Case Study of Machine Vision. Research Report No. 86-19.

    ERIC Educational Resources Information Center

    Chen, Kan; Stafford, Frank P.

    A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…

  5. Industrial Inspection with Open Eyes: Advance with Machine Vision Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Niel, Kurt

    Machine vision systems have evolved significantly with advancing technology to tackle the challenges of modern manufacturing industry. A wide range of industrial inspection applications for quality control benefit from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection, and combining multiple technologies makes it possible for inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.

  6. Ethical, environmental and social issues for machine vision in manufacturing industry

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Whelan, Paul F.

    1995-10-01

    Some of the ethical, environmental and social issues relating to the design and use of machine vision systems in manufacturing industry are highlighted. The authors' aim is to emphasize some of the more important issues and to raise general awareness of the need to consider the potential advantages and hazards of machine vision technology. In a short article like this it is impossible to cover the subject comprehensively; this paper should therefore be seen as a discussion document which, it is hoped, will provoke more detailed consideration of these very important issues. It follows from an article presented at last year's workshop. Five major topics are discussed: (1) the impact of machine vision systems on the environment; (2) the implications of machine vision for product and factory safety and for the health and well-being of employees; (3) the importance of intellectual integrity in a field requiring a careful balance of advanced ideas and technologies; (4) commercial and managerial integrity; and (5) the impact of machine vision technology on employment prospects, particularly for people with low skill levels.

  7. Machine vision for digital microfluidics

    NASA Astrophysics Data System (ADS)

    Shin, Yong-Jun; Lee, Jeong-Bong

    2010-01-01

    Machine vision is widely used in industrial environments today. It can perform various tasks, such as inspecting and controlling production processes, that may require human-like intelligence. The importance of imaging technology for biological research and medical diagnosis is greater than ever; for example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging increasingly demands the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology expected to become a true lab-on-a-chip platform: only small amounts of biological sample are required, and the experimental procedures can be controlled automatically. There is a strong need today for a digital microfluidics system integrated with machine vision for innovative biological research. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. We expect that a digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.

  8. Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras

    NASA Astrophysics Data System (ADS)

    Quinn, Mark Kenneth

    2018-05-01

    Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
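
    Dual-component PSP works by ratioing a pressure-sensitive colour channel against a pressure-insensitive reference channel, which cancels illumination non-uniformity and reduces temperature sensitivity. As a minimal sketch of that idea (not the paper's actual calibration), a linear Stern-Volmer-style model might look like the following; the coefficients `a`, `b` and the reference pressure are hypothetical placeholders for values obtained in a static calibration chamber:

```python
import numpy as np

def pressure_from_ratio(i_signal, i_reference, a=0.3, b=0.7, p_ref=101.3):
    """Pressure (kPa) from the ratio of the reference and pressure-sensitive
    channels via a linear Stern-Volmer-style calibration:
        I_ref / I_sig = a + b * (P / P_ref)
    The coefficients a and b are hypothetical; real values come from a
    calibration chamber, and a + b = 1 so that ratio 1 maps to P_ref."""
    ratio = np.asarray(i_reference, dtype=float) / np.asarray(i_signal, dtype=float)
    return p_ref * (ratio - a) / b

# Equal channel intensities recover the reference (ambient) pressure
p = pressure_from_ratio(1.0, 1.0)
```

    In a real system the two channels would be the demosaiced colour planes of the machine vision camera, and the calibration would be fitted per pixel or per region.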

  9. Reflections on the Development of a Machine Vision Technology for the Forest Products

    Treesearch

    Richard W. Conners; D.Earl Kline; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    The authors have approximately 25 years of experience in developing machine vision technology for the forest products industry. Based on this experience, this paper attempts to realistically predict what the future holds for this technology. In particular, it attempts to describe some of the benefits this technology will offer, describe how the technology...

  10. Color machine vision in industrial process control: case limestone mine

    NASA Astrophysics Data System (ADS)

    Paernaenen, Pekka H. T.; Lemstrom, Guy F.; Koskinen, Seppo

    1994-11-01

    An optical sorter technology has been developed to improve the profitability of a mine by using color line scan machine vision technology. The newly adopted technology lengthens the expected lifetime of the limestone mine and improves its efficiency. The project has also proved that today's color line scan technology can be successfully applied to industrial use in harsh environments.

  11. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    PubMed

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research shows the relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted in a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  12. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    PubMed Central

    Spinosa, Emanuele; Roberts, David A.

    2017-01-01

    Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research shows the relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted in a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access. PMID:28757553

  13. Giving Machines the Vision

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Amherst Systems manufactures foveal machine vision technology and systems commercially available to end users and system integrators. This technology was initially developed under NASA contracts NAS9-19335 (Johnson Space Center) and NAS1-20841 (Langley Research Center). This technology is currently being delivered to university research facilities and military sites. More information may be found at www.amherst.com.

  14. Machine Vision Technology for the Forest Products Industry

    Treesearch

    Richard W. Conners; D.Earl Kline; Philip A. Araman; Thomas T. Drayer

    1997-01-01

    From forest to finished product, wood is moved from one processing stage to the next, subject to the decisions of individuals along the way. While this process has worked for hundreds of years, the technology exists today to provide more complete information to the decision makers. Virginia Tech has developed this technology, creating a machine vision prototype for...

  15. Color line scan camera technology and machine vision: requirements to consider

    NASA Astrophysics Data System (ADS)

    Paernaenen, Pekka H. T.

    1997-08-01

    Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new camera and scanner technologies underscores. In the future, the movement from monochrome imaging to color will accelerate as machine vision users demand more knowledge about their product stream. As color has come to machine vision, the equipment used to digitize color images must meet certain requirements: color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera. These features become even more important when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been a much slower technique than gray-level image processing because of the three times greater data volume per image, and the three times greater memory required. Advances in computers, memory, and processing units now make it possible to handle even large color images cost-efficiently. In some cases image analysis can in fact be easier and faster with a color image than with a similar gray-level image because of the greater information per pixel. Color machine vision also sets new requirements for lighting: high-intensity white light is required to acquire images good enough for further processing or analysis. New developments in lighting technology will eventually bring solutions for color imaging.

  16. Recent advances in the development and transfer of machine vision technologies for space

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  17. Creative Technology and Rap

    ERIC Educational Resources Information Center

    Ch'ien, Evelyn

    2011-01-01

    This paper describes how a linguistic form, rap, can evolve in tandem with technological advances and manifest human-machine creativity. Rather than assuming that the interplay between machines and technology makes humans robotic or machine-like, the paper explores how the pressure of executing artistic visions using technology can drive…

  18. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability, and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented, because until now no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower cost, better performance, and the ability to handle irregular lighting, patterns, and color. Next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural-network-based systems, coupled with self-taught, n-space, non-linear modeling, promise to enable the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing without the need for computer programming, addressing the validation of highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.

  19. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  20. Reinforcement learning in computer vision

    NASA Astrophysics Data System (ADS)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies for solving computer vision tasks such as feature detection, image segmentation, object recognition, and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks, and the solutions are used for making decisions about possible future actions. It is not surprising that, when solving computer vision tasks, we should take into account the special aspects of their subsequent application in model-based predictive control. Reinforcement learning is a modern machine learning technology in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for applied tasks such as processing and analysis of visual information, and for specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning technology and its use for solving computer vision problems.
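
    The core of reinforcement learning as the abstract describes it (learning through interaction with the environment) can be illustrated with a toy tabular Q-learning agent. This is a generic sketch, not the paper's method; the 1-D chain stands in, very loosely, for an agent shifting an attention window toward a target:

```python
import random

def q_learning(n_states=5, goal=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D chain: the agent steps left/right and
    receives reward 1 on reaching the goal state. A generic illustration
    of learning by interaction, not the paper's algorithm."""
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = random.randrange(2) if random.random() < eps else (1 if q[s][1] >= q[s][0] else 0)
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else 0.0
            # one-step Q-learning update: Q <- Q + alpha * (r + gamma * max Q' - Q)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

random.seed(0)
q = q_learning()
```

    After training, the learned Q-values prefer the "right" action in every interior state, i.e. the agent has learned the policy that reaches the goal.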

  1. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.

  2. Artificial Intelligence/Robotics Applications to Navy Aircraft Maintenance.

    DTIC Science & Technology

    1984-06-01

    ...other automatic machinery such as presses, molding machines, and numerically-controlled machine tools, just as people do. ... Relevant AI technologies discussed include expert systems, automatic planning, natural language, and machine vision. ... building machines that imitate human behavior. Artificial intelligence is concerned with the functions of the brain, whereas robotics includes...

  3. Lumber Scanning System for Surface Defect Detection

    Treesearch

    D. Earl Kline; Y. Jason Hou; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1992-01-01

    This paper describes research aimed at developing a machine vision technology to drive automated processes in the hardwood forest products manufacturing industry. An industrial-scale machine vision system has been designed to scan variable-size hardwood lumber for detecting important features that influence the grade and value of lumber such as knots, holes, wane,...

  4. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    PubMed

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

    Individual items of any agricultural commodity differ from each other in colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment and contaminant detection.
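
    Colour-image inspection of the kind described above often starts with a simple per-pixel colour rule before any spectral analysis. The sketch below is purely illustrative (the threshold and the green/red "brownish" rule are assumptions, not taken from the chapter); it flags low green-to-red pixels as candidate surface defects on, say, an apple:

```python
import numpy as np

def defect_fraction(rgb, ratio_thresh=0.35):
    """Fraction of pixels flagged as potential surface defects by a crude
    colour rule: a low green/red ratio reads as 'brownish'. Threshold and
    rule are illustrative assumptions only."""
    rgb = np.asarray(rgb, dtype=float)
    r, g = rgb[..., 0], rgb[..., 1]
    ratio = g / np.maximum(r, 1.0)   # guard against division by zero on dark pixels
    return float((ratio < ratio_thresh).mean())
```

    A production system would add illumination calibration, morphological cleanup of the defect mask, and, as the chapter notes, extra spectral bands beyond RGB.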

  5. Machine vision methods for use in grain variety discrimination and quality analysis

    NASA Astrophysics Data System (ADS)

    Winter, Philip W.; Sokhansanj, Shahab; Wood, Hugh C.

    1996-12-01

    Decreasing cost of computer technology has made it feasible to incorporate machine vision technology into the agriculture industry. The biggest attraction to using a machine vision system is the computer's ability to be completely consistent and objective. One use is in the variety discrimination and quality inspection of grains. Algorithms have been developed using Fourier descriptors and neural networks for use in variety discrimination of barley seeds. RGB and morphology features have been used in the quality analysis of lentils, and probability distribution functions and L,a,b color values for borage dockage testing. These methods have been shown to be very accurate and have a high potential for agriculture. This paper presents the techniques used and results obtained from projects including: a lentil quality discriminator, a barley variety classifier, a borage dockage tester, a popcorn quality analyzer, and a pistachio nut grading system.
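
    The Fourier-descriptor shape features mentioned for barley variety discrimination can be sketched briefly. This is a textbook construction, not the authors' exact pipeline: contour points are treated as complex numbers, and the FFT coefficients are normalised so the descriptors are invariant to translation, scale, rotation, and starting point:

```python
import numpy as np

def fourier_descriptors(contour, n=8):
    """First n Fourier descriptors of a closed 2-D contour (N x 2 array).
    Treating points as complex z = x + iy: dropping the DC term removes
    translation, dividing by |c1| removes scale, and keeping only the
    magnitudes discards rotation and starting point."""
    z = contour[:, 0] + 1j * contour[:, 1]
    c = np.fft.fft(z)
    return np.abs(c[1:n + 1]) / np.abs(c[1])

# Sanity check: a circle concentrates all its energy in the first harmonic
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
fd = fourier_descriptors(circle)
```

    The resulting descriptor vector would then feed a classifier such as the neural network the abstract mentions.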

  6. Quality Control by Artificial Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, Edmond Y.; Gleason, Shaun Scott; Niel, Kurt S.

    2010-01-01

    Computational technology has fundamentally changed many aspects of our lives. One clear evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Like the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers relevant to fundamental technology improvements that foster quality control by artificial vision, as well as fine-tuning of the technology for specific applications. We aim to balance theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatments of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with the wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters. They also show how to use discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier-basis functions for this task. Image registration is another important task for machine vision. Bingham and Arrowood investigate the implementation and results of applying Fourier phase matching to projection registration, with a particular focus on nondestructive testing using computed tomography. Readers interested in enriching their arsenal of image-processing algorithms for machine-vision tasks should find these papers rewarding. Meanwhile, we have four papers dealing with more specific machine-vision tasks. The first, Yahiaoui et al., is quantitative in nature, using machine vision for real-time passenger counting. Occlusion is a common problem in counting objects and people, and they circumvent this issue with a dense stereovision system, achieving 97 to 99% accuracy in their tests. The second paper, by Oswald-Tranta et al., focuses on thermographic crack detection: an infrared camera is used to detect inhomogeneities that may indicate surface cracks, and they describe the steps in developing fully automated testing equipment aimed at high throughput. Another paper describing an inspection system is Molleda et al., which handles flatness inspection of rolled products; they employ optical-laser triangulation and 3-D surface reconstruction, showing how these can be achieved in real time. Last but not least, Presles et al. propose a way to monitor the particle-size distribution of batch crystallization processes, achieved through a new in situ imaging probe and image-analysis methods. While it is unlikely that any reader is working on all four of these specific problems at the same time, we are confident that readers will find these papers inspiring and potentially helpful to their own machine-vision system developments.

  7. Scanning System -- Technology Worth a Look

    Treesearch

    Philip A. Araman; Daniel L. Schmoldt; Richard W. Conners; D. Earl Kline

    1995-01-01

    In an effort to help automate the inspection for lumber defects, optical scanning systems are emerging as an alternative to the human eye. Although still in its infancy, scanning technology is being explored by machine companies and universities. This article was excerpted from "Machine Vision Systems for Grading and Processing Hardwood Lumber," by Philip...

  8. Current Technologies and its Trends of Machine Vision in the Field of Security and Disaster Prevention

    NASA Astrophysics Data System (ADS)

    Hashimoto, Manabu; Fujino, Yozo

    Image sensing technologies are expected to be a useful and effective way to limit the damage caused by crime and disasters in a safe and secure society. In this paper we describe the current key subjects, required functions, and technical trends, together with real examples of developed systems. For video surveillance, recognition of human trajectories and human behavior using image processing techniques is introduced, with a real example of violence detection in elevators. In the field of facility monitoring for civil engineering, useful machine vision applications are shown, such as automatic detection of concrete cracks on building walls and recognition of crowds on a bridge for effective guidance in an emergency.

  9. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, and Flash, by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity-map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pairs of many different sizes, with a maximum image size of up to 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum disparity of 64 pixels. PMID:23459385
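The SAD matching step described in this record can be made concrete in software. Below is a minimal, unoptimized Python sketch (my own illustration, not the authors' FPGA implementation) of dense disparity computation with a square matching window and a bounded disparity search:

```python
import numpy as np

def sad_disparity(left, right, window=5, max_disp=64):
    """Dense disparity map by Sum of Absolute Differences block matching.

    For each pixel, the right image is searched along the same scanline for
    the horizontal shift whose window-wise SAD against the left image is
    minimal; that shift is the pixel's disparity.
    """
    h, w = left.shape
    r = window // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = None, 0
            # Search disparities that keep the candidate window in bounds.
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

On hardware like the SoPC described above, the per-window comparisons are parallelized in logic; the nested loops here only show the arithmetic being performed.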

  10. Characteristics of the Arcing Plasma Formation Effect in Spark-Assisted Chemical Engraving of Glass, Based on Machine Vision

    PubMed Central

    Wu, Dung-Sheng

    2018-01-01

    Spark-assisted chemical engraving (SACE) is a non-traditional machining technology that is used to machine electrically non-conducting materials including glass, ceramics, and quartz. The processing accuracy, machining efficiency, and reproducibility are the key factors in the SACE process. In the present study, a machine vision method is applied to monitor and estimate the status of a SACE-drilled hole in quartz glass. During machining, the spring-fed tool electrode was pre-pressured on the quartz glass surface to keep the electrode in contact with the machining surface. In situ image acquisition and analysis of the SACE drilling processes were used to analyze the captured images of the state of the spark discharge at the tip and sidewall of the electrode. The results indicated an association between the accumulated size of the SACE-induced spark area and the depth of the hole: the evaluated depths of the SACE-machined holes were a proportional function of the accumulated spark size with a high degree of correlation. The study proposes an innovative computer-vision-based method to estimate the depth and status of SACE-drilled holes in real time. PMID:29565303

  11. Characteristics of the Arcing Plasma Formation Effect in Spark-Assisted Chemical Engraving of Glass, Based on Machine Vision.

    PubMed

    Ho, Chao-Ching; Wu, Dung-Sheng

    2018-03-22

    Spark-assisted chemical engraving (SACE) is a non-traditional machining technology that is used to machine electrically non-conducting materials including glass, ceramics, and quartz. The processing accuracy, machining efficiency, and reproducibility are the key factors in the SACE process. In the present study, a machine vision method is applied to monitor and estimate the status of a SACE-drilled hole in quartz glass. During machining, the spring-fed tool electrode was pre-pressured on the quartz glass surface to keep the electrode in contact with the machining surface. In situ image acquisition and analysis of the SACE drilling processes were used to analyze the captured images of the state of the spark discharge at the tip and sidewall of the electrode. The results indicated an association between the accumulated size of the SACE-induced spark area and the depth of the hole: the evaluated depths of the SACE-machined holes were a proportional function of the accumulated spark size with a high degree of correlation. The study proposes an innovative computer-vision-based method to estimate the depth and status of SACE-drilled holes in real time.
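The correlation reported in these two records — cumulative spark area tracking hole depth — suggests a simple estimation pipeline: threshold each frame to segment spark pixels, accumulate their area over the drilling sequence, and map the accumulated area to depth with a linear calibration. The sketch below illustrates that idea; the intensity threshold and the linear model are hypothetical stand-ins for the paper's empirically derived values.

```python
import numpy as np

def cumulative_spark_area(frames, threshold=200):
    """Total count of bright (spark) pixels over a sequence of grayscale frames."""
    return sum(int((f >= threshold).sum()) for f in frames)

def fit_depth_model(areas, depths):
    """Least-squares line depth = a * area + b from calibration drillings."""
    a, b = np.polyfit(np.asarray(areas, float), np.asarray(depths, float), 1)
    return a, b

def estimate_depth(frames, a, b, threshold=200):
    """Estimate hole depth from the accumulated spark area of `frames`."""
    return a * cumulative_spark_area(frames, threshold) + b
```

In practice the calibration pairs (area, depth) would come from holes drilled to known depths under the same process parameters.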

  12. LED light design method for high contrast and uniform illumination imaging in machine vision.

    PubMed

    Wu, Xiaojun; Gao, Guangming

    2018-03-01

    In machine vision, illumination is critical in determining the complexity of the inspection algorithms. Proper lighting yields clear, sharp images with the highest contrast and low noise between the object of interest and the background, which helps the target be located, measured, or inspected. In contrast to the empirical trial-and-error convention of selecting off-the-shelf LED lights in machine vision, an optimization algorithm for LED light design is proposed in this paper. It is composed of contrast-optimization modeling and a uniform-illumination technique for non-normal incidence (UINI). The contrast-optimization model is built on the surface-reflection characteristics, e.g., the roughness, the refractive index, and the light direction, to maximize the contrast between the features of interest and the background. The UINI keeps the lighting optimized by the contrast model uniform. The simulation and experimental results demonstrate that the optimization algorithm is effective and suitable for producing images with the highest contrast and uniformity, which is very useful for the design of LED illumination systems in machine vision.
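A contrast objective of the kind such a lighting optimization would maximize can be made concrete. The sketch below computes the Michelson contrast between the mean object and mean background intensities, given a segmentation mask; this is an illustrative metric of my own choosing, not the paper's reflection-based model.

```python
import numpy as np

def michelson_contrast(image, object_mask):
    """Michelson contrast |I_obj - I_bg| / (I_obj + I_bg) between the mean
    intensity of the object pixels and the mean intensity of the background."""
    img = image.astype(float)
    obj = img[object_mask].mean()
    bg = img[~object_mask].mean()
    return abs(obj - bg) / (obj + bg)
```

A lighting optimizer would evaluate such a score over candidate LED placements and intensities, keeping the configuration that maximizes it.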

  13. Potential application of machine vision technology to saffron (Crocus sativus L.) quality characterization.

    PubMed

    Kiani, Sajad; Minaei, Saeid

    2016-12-01

    Saffron quality characterization is an important issue in the food industry and of interest to consumers. This paper proposes an expert system based on the application of machine vision technology for characterization of saffron and shows how it can be employed in practice. There is a correlation between saffron color, its geographic location of production, and some chemical attributes, which could be used for characterization of saffron quality and freshness. This may be accomplished by employing image-processing techniques coupled with multivariate data analysis for quantification of saffron properties. Expert algorithms can be made available for prediction of saffron characteristics such as color, as well as for product classification. Copyright © 2016. Published by Elsevier Ltd.

  14. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

    We re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half-century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the four decades following Licklider’s challenge, despite significant advancements in the fields of human factors and computer science, and Licklider’s vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider’s vision a reality. This paper provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.

  15. Technology and Web-Based Support

    ERIC Educational Resources Information Center

    Smith, Carol

    2008-01-01

    Many types of technology support caregiving: (1) Assistive devices include medicine dispensers, feeding and bathing machines, clothing with polypropylene fibers that stimulate muscles, intelligent ambulatory walkers for those with both vision and mobility impairment, medication reminders, and safety alarms; (2) Telecare devices ranging from…

  16. The use of morphological characteristics and texture analysis in the identification of tissue composition in prostatic neoplasia.

    PubMed

    Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W

    2004-09-01

    Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole-slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
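The texture step described here can be sketched in code. The fragment below computes a horizontal-offset gray-level co-occurrence matrix (GLCM) for a subregion and evaluates Haralick feature 4 (sum of squares: variance); the quantization to eight gray levels is an assumption for illustration, not a parameter from the study.

```python
import numpy as np

def glcm(patch, levels=8):
    """Normalized gray-level co-occurrence matrix for the horizontal (0, 1)
    pixel offset, after quantizing the patch to `levels` gray levels."""
    q = (patch.astype(float) * levels / 256).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    s = m.sum()
    return m / s if s else m

def haralick_variance(p):
    """Haralick feature 4 (sum of squares: variance) of a normalized GLCM."""
    i = np.arange(p.shape[0])
    mu = (i[:, None] * p).sum()
    return float((((i[:, None] - mu) ** 2) * p).sum())
```

Applied to each 100 × 100 subregion, such a feature value would feed the classification rule that separates stroma from carcinoma.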

  17. On-line welding quality inspection system for steel pipe based on machine vision

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2017-05-01

    In recent years, high-frequency welding has been widely used in production because of its simplicity, reliability, and high quality. In the production process, effectively controlling weld penetration — ensuring full penetration and a uniform weld so as to guarantee welding quality — is the problem to be solved at the present stage, and it is an important research field in welding technology. In this paper, based on a study of several welding inspection methods, an on-line welding quality inspection system based on machine vision is designed.

  18. Rubber hose surface defect detection system based on machine vision

    NASA Astrophysics Data System (ADS)

    Meng, Fanwu; Ren, Jingrui; Wang, Qi; Zhang, Teng

    2018-01-01

    As an important part connecting the engine, air filter, cooling system, and automobile air-conditioning system, the automotive hose is widely used in automobiles. Therefore, the determination of the surface quality of the hose is particularly important. This research is based on machine vision technology, using HALCON algorithms to process hose images and identify surface defects of the hose. In order to improve the detection accuracy of the vision system, this paper proposes a method to classify the defects to reduce misjudgment. The experimental results show that the method can detect surface defects accurately.

  19. Integrated Imaging and Vision Techniques for Industrial Inspection: A Special Issue on Machine Vision and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep

    2010-06-05

    Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image-analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.

  20. A low-cost machine vision system for the recognition and sorting of small parts

    NASA Astrophysics Data System (ADS)

    Barea, Gustavo; Surgenor, Brian W.; Chauhan, Vedang; Joshi, Keyur D.

    2018-04-01

    An automated machine vision-based system for the recognition and sorting of small parts was designed, assembled, and tested. The system was developed to address a need to expose engineering students to the issues of machine vision and assembly automation technology, with readily available and relatively low-cost hardware and software. This paper outlines the design of the system and presents experimental performance results. Three different styles of plastic gears, together with three different styles of defective gears, were used to test the system. A pattern-matching tool was used for part classification. Nine experiments were conducted to demonstrate the effects of changing various hardware and software parameters, including conveyor speed, gear feed rate, and classification and identification score thresholds. It was found that the system could achieve a maximum accuracy of 95% at a feed rate of 60 parts/min, for a given set of parameter settings. Future work will look at the effect of lighting.
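Pattern matching with a classification score threshold, as used in this system, can be sketched as normalized cross-correlation against a bank of part templates; the template names and the 0.8 threshold below are illustrative placeholders, not the actual system's parameters.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation in [-1, 1]."""
    a = patch.astype(float) - patch.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def classify(patch, templates, score_threshold=0.8):
    """Return the name of the best-matching template, or 'reject' when the
    best score falls below the classification threshold."""
    scores = {name: ncc(patch, t) for name, t in templates.items()}
    name = max(scores, key=scores.get)
    return name if scores[name] >= score_threshold else "reject"
```

Raising the threshold trades false accepts for false rejects, which is exactly the parameter effect the nine experiments above explore.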

  1. Neural network expert system for X-ray analysis of welded joints

    NASA Astrophysics Data System (ADS)

    Kozlov, V. V.; Lapik, N. V.; Popova, N. V.

    2018-03-01

    The use of intelligent technologies for the automated analysis of product quality is one of the main trends in modern machine building. At the same time, methods based on artificial neural networks, as the foundation for building automated intelligent diagnostic systems, are undergoing rapid development in various spheres of human activity. Machine vision technologies make it possible to effectively detect the presence of certain regularities in the analyzed image, including defects of welded joints, from radiography data.

  2. Technology for robotic surface inspection in space

    NASA Technical Reports Server (NTRS)

    Volpe, Richard; Balaram, J.

    1994-01-01

    This paper presents on-going research in robotic inspection of space platforms. Three main areas of investigation are discussed: machine vision inspection techniques, an integrated sensor end-effector, and an orbital environment laboratory simulation. Machine vision inspection utilizes automatic comparison of new and reference images to detect on-orbit induced damage such as micrometeorite impacts. The cameras and lighting used for this inspection are housed in a multisensor end-effector, which also contains a suite of sensors for detection of temperature, gas leaks, proximity, and forces. To fully test all of these sensors, a realistic space platform mock-up has been created, complete with visual, temperature, and gas anomalies. Further, changing orbital lighting conditions are effectively mimicked by a robotic solar simulator. In the paper, each of these technology components will be discussed, and experimental results are provided.

  3. In-process fault detection for textile fabric production: onloom imaging

    NASA Astrophysics Data System (ADS)

    Neumann, Florian; Holtermann, Timm; Schneider, Dorian; Kulczycki, Ashley; Gries, Thomas; Aach, Til

    2011-05-01

    Constant and traceable high fabric quality is of great importance both for technical and for high-quality conventional fabrics. Usually, quality inspection is carried out by trained personnel, whose detection rate and maximum period of concentration are limited. Low-resolution automated fabric inspection machines using texture analysis have been developed, and since 2003 systems for in-process inspection on weaving machines ("onloom") have been commercially available. With these, defects can be detected, but not measured quantitatively and precisely. Most systems are also prone to inevitable machine vibrations. Feedback loops for fault prevention are not established. Technology has evolved since 2003: camera and computer prices dropped, resolutions were enhanced, and recording speeds increased. These are the preconditions for real-time processing of high-resolution images. So far, these new technological achievements are not used in textile fabric production. For efficient use, a measurement system must be integrated into the weaving process, and new algorithms for defect detection and measurement must be developed. The goal of the joint project is the development of a modern machine vision system for nondestructive onloom fabric inspection. The system consists of a vibration-resistant machine integration, a high-resolution machine vision system, and new, reliable, and robust algorithms with a quality database for defect documentation. The system is meant to detect, measure, and classify at least 80% of economically relevant defects. Concepts for feedback loops into the weaving process will be pointed out.

  4. Development of a Computer Vision Technology for the Forest Products Manufacturing Industry

    Treesearch

    D. Earl Kline; Richard Conners; Philip A. Araman

    1992-01-01

    The goal of this research is to create an automated processing/grading system for hardwood lumber that will be of use to the forest products industry. The objective of creating a full scale machine vision prototype for inspecting hardwood lumber will become a reality in calendar year 1992. Space for the full scale prototype has been created at the Brooks Forest...

  5. Why the Future Doesn't Come from Machines: Unfounded Prophecies and the Design of Naturoids

    ERIC Educational Resources Information Center

    Negrotti, Massimo

    2008-01-01

    Technological imagination and actual technological achievements have always been two very different things. Sudden and unpredictable events always seem to intervene between our visions regarding possible futures and the subsequent concrete realizations. Thus, our ideas and projects are continually being redirected. In the field of…

  6. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body-motion interactive system with computer vision technology. The application combines interactive games, art performance, and an exercise training system. Multiple image-processing and computer vision technologies are used in this study. The system can calculate the color characteristics of an object and then perform color segmentation. To avoid wrong action judgments, the system uses a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best action judgment from the weighted votes. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method has good accuracy and stability during operation of the human-machine interface of the sports training system.
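A weight voting mechanism of the kind described can be sketched as follows, with each candidate action judgment carrying a condition score and a weight; the tuple format and sample action names are my own illustration.

```python
def weighted_vote(judgments):
    """Pick the action with the highest total of score * weight.

    `judgments` is an iterable of (action, condition_score, weight) tuples,
    one per judging rule or cue; summing weighted scores lets a strongly
    weighted cue override several weak, possibly erroneous ones.
    """
    totals = {}
    for action, score, weight in judgments:
        totals[action] = totals.get(action, 0.0) + score * weight
    return max(totals, key=totals.get)
```

In the system above, the weights would be tuned so that unreliable cues contribute little, suppressing wrong action judgments.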

  7. Nondestructive and rapid detection of potato black heart based on machine vision technology

    NASA Astrophysics Data System (ADS)

    Tian, Fang; Peng, Yankun; Wei, Wensong

    2016-05-01

    Potatoes are one of the major food crops in the world. Potato black heart is a kind of defect in which the surface is intact while the tissue beneath the skin turns black. Such potatoes are inedible, but the defect is difficult to detect with conventional methods. A nondestructive detection system based on machine vision technology was proposed in this study to distinguish normal and black-heart potatoes according to their different transmittance. The detection system was equipped with a monochrome CCD camera, LED light sources for transmitted illumination, and a computer. Firstly, transmission images of normal and black-heart potatoes were taken by the detection system. Then the images were processed by an algorithm written in VC++. As the transmitted light intensity was influenced by the radial dimension of the potato samples, the relationship between the grayscale value and the potato radial dimension was acquired by analyzing how the grayscale value of the transmission image changes. A proper judging condition was then confirmed to distinguish normal and black-heart potatoes after image preprocessing. The results showed that the system, coupled with the processing methods, was suitable for the detection of potato black heart at a considerable accuracy rate. The transmission detection technique based on machine vision is nondestructive and feasible for detecting potato black heart.
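The judging condition described — a grayscale threshold compensated for the potato's radial dimension — might be sketched as below. The calibration constants and the reference radius are hypothetical placeholders for the empirically acquired grayscale-versus-dimension relationship.

```python
import numpy as np

def classify_potato(image, radius_mm, coeff=2.0, offset=40.0):
    """Flag black heart when the mean transmitted intensity falls below a
    size-compensated threshold.

    `coeff` and `offset` are hypothetical calibration constants; a thicker
    potato transmits less light, so the threshold drops as the radius grows
    toward the 60 mm reference size assumed here.
    """
    expected = offset + coeff * (60.0 - radius_mm)   # size-compensated cutoff
    mean_gray = float(image[image > 0].mean())       # potato silhouette only
    return "black heart" if mean_gray < expected else "normal"
```

The real system derives the compensation curve from measurements of many samples rather than a single linear rule.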

  8. Modern trends in industrial technology of production of optical polymeric components for night vision devices

    NASA Astrophysics Data System (ADS)

    Goev, A. I.; Knyazeva, N. A.; Potelov, V. V.; Senik, B. N.

    2005-06-01

    The present paper represents in detail the complex approach to creating industrial technology of production of polymeric optical components: information has been given on optical polymeric materials, automatic machines for injection moulding, the possibilities of the Moldflow system (the AB "Universal" company) used for mathematical simulation of the technological process of injection moulding and making the moulds.

  9. The Future System for Roughmill Optimization

    Treesearch

    Richard W. Conners; D.Earl Kline; Philip A. Araman; Thomas T. Drayer

    1997-01-01

    From forest to finished product, wood is moved from one processing stage to the next, subject to the decisions of individuals along the way. While this process has worked for hundreds of years, the technology exists today to provide more complete information to the decision makers. Virginia Tech has developed this technology, creating a machine vision prototype for...

  10. Humans and machines in space: The vision, the challenge, the payoff; AAS Goddard Memorial Symposium, 29th, Washington, DC, March 14-15, 1991

    NASA Astrophysics Data System (ADS)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    A recent symposium produced papers in the areas of solar system exploration, man-machine interfaces, cybernetics, virtual reality, telerobotics, life support systems, and the scientific and technological spinoffs from the NASA space program. A number of papers also addressed the social and economic impacts of the space program. For individual titles, see A95-87468 through A95-87479.

  11. Machine vision systems using machine learning for industrial product inspection

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learned by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.

  12. Sensor fusion of phase measuring profilometry and stereo vision for three-dimensional inspection of electronic components assembled on printed circuit boards.

    PubMed

    Hong, Deokhwa; Lee, Hyunki; Kim, Min Young; Cho, Hyungsuck; Moon, Jeon Il

    2009-07-20

    Automatic optical inspection (AOI) for printed circuit board (PCB) assembly plays a very important role in modern electronics manufacturing industries. Well-developed inspection machines in each assembly process are required to ensure the manufacturing quality of the electronics products. However, almost all AOI machines are based on 2D image-analysis technology. In this paper, a 3D-measurement-based AOI system is proposed, consisting of a phase shifting profilometer and a stereo vision system, for electronic components assembled on a PCB after component mounting and the reflow process. In this system, information from the two visual systems is fused to extend the shape-measurement range limited by the 2π phase ambiguity of the phase shifting profilometer, and thus to maintain the fine measurement resolution and high accuracy of the phase shifting profilometer over the measurement range extended by the stereo vision. The main purpose is to overcome the low inspection reliability of 2D-based inspection machines by using 3D information about the components. The 3D shape measurement results on PCB-mounted electronic components are shown and compared with results from contact and noncontact 3D measuring machines. Based on a series of experiments, the usefulness of the proposed sensor system and its fusion technique are discussed and analyzed in detail.
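The fusion idea — using the coarse stereo height to resolve the profilometer's 2π phase ambiguity — can be sketched per pixel: the wrapped phase fixes the fractional fringe position, and the stereo estimate selects the integer fringe order. This is a generic illustration of phase unwrapping with a coarse height cue, not the paper's exact formulation.

```python
import math

def fuse_height(phase, coarse_height, period):
    """Resolve the fringe-order ambiguity of a phase measurement.

    A wrapped phase in [0, 2*pi) maps to height (phase / (2*pi) + k) * period
    for an unknown integer fringe order k; the coarse stereo height picks the
    k whose height lies closest to it, keeping the profilometer's resolution.
    """
    frac = phase / (2 * math.pi)
    k = round(coarse_height / period - frac)
    return (frac + k) * period
```

The stereo estimate only needs to be accurate to within half a fringe period for the correct order to be chosen.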

  13. Intelligent image processing for machine safety

    NASA Astrophysics Data System (ADS)

    Harvey, Dennis N.

    1994-10-01

    This paper describes the use of intelligent image processing as a machine guarding technology. One or more color, linear-array cameras are positioned to view the critical region(s) around a machine tool or other piece of manufacturing equipment. The image data is processed to provide indicators of conditions dangerous to the equipment via color content, shape content, and motion content. The data from these analyses is then sent to a threat evaluator, whose purpose is to determine whether a potentially machine-damaging condition exists based on the analyses of color, shape, and motion, and on 'knowledge' of the specific environment of the machine. The threat evaluator employs fuzzy logic as a means of dealing with uncertainty in the vision data.

  14. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper, and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations: in many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image-processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limitations of a journal paper, so only the most important aspects are discussed, along with an overview of the major areas of application for colorimetric vision systems. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
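The colorimetric principle stressed here — that a camera's RGB values must be transformed into a standard color space before any color measurement — can be illustrated with the standard sRGB-to-CIE-XYZ conversion (D65 white point, IEC 61966-2-1 linearization and matrix):

```python
def srgb_to_xyz(r, g, b):
    """Convert 8-bit sRGB to CIE XYZ (D65).

    Applies the standard sRGB linearization, then the sRGB-to-XYZ matrix;
    this is the colorimetric step a camera-based system must perform before
    computing any device-independent color difference.
    """
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z
```

A practical colorimetric vision system would additionally calibrate the camera's actual spectral response against this model, since real sensors are not exactly sRGB.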

  15. Three-Dimensional Images For Robot Vision

    NASA Astrophysics Data System (ADS)

    McFarland, William D.

    1983-12-01

    Robots are attracting increased attention in the industrial productivity crisis. For this nation to maintain technological leadership, the need for robot vision has become critical. The "blind" robot, while occupying an economical niche at present, is severely limited and job specific, being only one step up from numerically controlled machines. To satisfy robot vision requirements, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with emphasis on laser-radar-type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed, with some discussion of the major techniques in the literature for three-dimensional scene analysis.

  16. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, like image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper will describe briefly a couple of the consumer digital standards and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  17. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

Real-time application of digital imaging in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors, without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor has been developed, representing a single facet of the fly's eye. This sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is required to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision problems.
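The hyperacuity property described in this abstract comes from overlapping, graded sensor responses rather than discrete pixels. The following is a minimal numerical sketch (not the Wyoming analog hardware; the sensor count, spacing, and Gaussian profile width are illustrative assumptions) showing how a response-weighted centroid over a few overlapping sensors can localize a point stimulus to a small fraction of the sensor spacing:

```python
import math

# Seven "photoreceptors" with overlapping Gaussian acceptance profiles,
# spaced 1.0 unit apart (the spacing plays the role of the inter-facet angle).
centers = [float(i) for i in range(7)]
sigma = 0.9  # wide enough that neighbouring sensors overlap strongly

def responses(x):
    """Analog response of each sensor to a point stimulus at position x."""
    return [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]

def localize(resp):
    """Response-weighted centroid: a simple hyperacuity-style estimator."""
    total = sum(resp)
    return sum(c * r for c, r in zip(centers, resp)) / total

true_pos = 3.37  # lies between sensors 3 and 4
est = localize(responses(true_pos))
print(abs(est - true_pos))  # error is a small fraction of the 1.0 spacing
```

With only seven coarse sensors, the estimate lands within a few thousandths of a unit of the true position, which is the essence of sub-receptor (hyperacuity) localization.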

  18. Task-focused modeling in automated agriculture

    NASA Astrophysics Data System (ADS)

    Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack

    1993-01-01

    Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.

  19. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  20. Using Multiple FPGA Architectures for Real-time Processing of Low-level Machine Vision Functions

    Treesearch

    Thomas H. Drayer; William E. King; Philip A. Araman; Joseph G. Tront; Richard W. Conners

    1995-01-01

    In this paper, we investigate the use of multiple Field Programmable Gate Array (FPGA) architectures for real-time machine vision processing. The use of FPGAs for low-level processing represents an excellent tradeoff between software and special purpose hardware implementations. A library of modules that implement common low-level machine vision operations is presented...

  1. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputing visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  2. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  3. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

Obtaining clear images, which improves classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. Analysis of rice seed reflectance curves showed that the optimal light-source wavelength for discriminating diseased seeds from normal seeds in the monochromatic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine the optimal conditions for acquiring digital images of rice seed with a computer vision system, an adjustable color machine vision system was developed. With a 20 mm to 25 mm lens extender, the system produced close-up images that made it easy to recognize the characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease using shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system. The same algorithm yielded better results under the optimized conditions for quality inspection of rice seed. In particular, the image processing can resolve details such as fine fissures.

  4. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  5. Precise positioning method for multi-process connecting based on binocular vision

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan

    2016-01-01

With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors and signal transmitters is increasingly urgent. Such parts, with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy, are generally fabricated by a combination of different manufacturing technologies. However, it is difficult to ensure machining precision because of the connection error between different processing methods. Therefore, a precise positioning method based on binocular micro stereo vision is proposed in this paper. Firstly, a novel and efficient camera calibration method for the stereoscopic microscope is presented to address the narrow field of view, small depth of focus, and numerous nonlinear distortions. Secondly, extraction algorithms for regular and free-form curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereo vision is set up and embedded in a CNC machining experiment platform. Finally, a verification experiment on positioning accuracy was conducted; the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.
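The core of any binocular positioning pipeline like the one above is triangulation: recovering a 3-D point from its projections in two calibrated cameras. The paper's calibration and curve-extraction methods are not reproduced here; the following is a standard linear (DLT) triangulation sketch with toy projection matrices, assuming ideal pinhole cameras with a 1-unit baseline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3-D point from its normalized
    image coordinates x1, x2 in two cameras with 3x4 projection matrices."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3-D point is the null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: identity pose, and a 1-unit baseline along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [0.2, -0.1, 4.0]
```

In a real system the micrometre-level accuracy reported above depends on the quality of the microscope calibration; the linear solve itself is exact for noise-free projections.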

  6. Feasibility of Robotics and Machine Vision in Military Combat Ration Inspection (Short Term Project STP No. 11)

    DTIC Science & Technology

    1994-06-01

...signals. Industrial robot controllers have several general-purpose ports which can be programmed within the manipulator program. In this way the gen ri... well as a functional end-effector was developed and evaluated. The workcell was found technologically feasible; however, further experimental work...

  7. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    PubMed

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple; however, data handling and analysis are not as well developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to automate phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.
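The kNN step named in this abstract is conceptually simple: each pixel (or feature vector) is labeled by a majority vote among its nearest training examples. A minimal pure-Python sketch, using hypothetical two-feature "pixel" data rather than the paper's images, looks like this:

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours
    under Euclidean distance."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical training pixels: (normalized green, normalized red) features.
# "plant" pixels are greener; "background" pixels are redder.
train = [(0.8, 0.1), (0.7, 0.2), (0.9, 0.15),
         (0.2, 0.7), (0.1, 0.8), (0.25, 0.6)]
labels = ["plant", "plant", "plant",
          "background", "background", "background"]

print(knn_predict(train, labels, (0.75, 0.2)))   # plant
print(knn_predict(train, labels, (0.15, 0.75)))  # background
```

Because kNN votes over raw distances, feature scaling matters, which is consistent with the authors' finding that the choice of data normalisation affected each algorithm's optimal configuration.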

  8. Machine Learning, deep learning and optimization in computer vision

    NASA Astrophysics Data System (ADS)

    Canu, Stéphane

    2017-03-01

As noted at the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning, covering basic motivations, ideas, models and optimization in deep learning for computer vision, and identifying challenges and opportunities. It will focus on issues related to large-scale learning, that is: high-dimensional features, a large variety of visual classes, and a large number of examples.

  9. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  10. Automated hardwood lumber grading utilizing a multiple sensor machine vision technology

    Treesearch

    D. Earl Kline; Chris Surak; Philip A. Araman

    2003-01-01

    Over the last 10 years, scientists at the Thomas M. Brooks Forest Products Center, the Bradley Department of Electrical and Computer Engineering, and the USDA Forest Service have been working on lumber scanning systems that can accurately locate and identify defects in hardwood lumber. Current R&D efforts are targeted toward developing automated lumber grading...

  11. 11th Annual Intelligent Ground Vehicle Competition: team approaches to intelligent driving and machine vision

    NASA Astrophysics Data System (ADS)

    Theisen, Bernard L.; Lane, Gerald R.

    2003-10-01

The Intelligent Ground Vehicle Competition (IGVC) is one of three unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI) in the 1990s. The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Both U.S. and international teams focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 11 years, the competition has challenged both undergraduates and graduates, including Ph.D. students, with real-world applications in intelligent transportation systems, the military, and manufacturing automation. To date, teams from over 40 universities and colleges have participated. In this paper, we describe some of the applications of the technologies required by this competition and discuss the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the three-day competition are highlighted. Finally, an assessment of the competition based on participant feedback is presented.

  12. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system: using a neurally based computing substrate, it can complete all necessary visual tasks in real time.

  13. Sensory Interactive Teleoperator Robotic Grasping

    NASA Technical Reports Server (NTRS)

    Alark, Keli; Lumia, Ron

    1997-01-01

As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction is performed on the object in the image, information about the object's location, orientation and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.

  14. A Machine Vision Quality Control System for Industrial Acrylic Fibre Production

    NASA Astrophysics Data System (ADS)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. Brázio; Dinis, João

    2002-12-01

    This paper describes the implementation of INFIBRA, a machine vision system used in the quality control of acrylic fibre production. The system was developed by INETI under a contract with a leading industrial manufacturer of acrylic fibres. It monitors several parameters of the acrylic production process. This paper presents, after a brief overview of the system, a detailed description of the machine vision algorithms developed to perform the inspection tasks unique to this system. Some of the results of online operation are also presented.

  15. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Treesearch

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  16. Robust Spatial Autoregressive Modeling for Hardwood Log Inspection

    Treesearch

    Dongping Zhu; A.A. Beex

    1994-01-01

    We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...

  17. Development of a Machine-Vision System for Recording of Force Calibration Data

    NASA Astrophysics Data System (ADS)

    Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat

This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system were used to capture images of the readings from the instruments during calibration. The measurement images were then transformed and translated into numerical data using optical character recognition (OCR). These numerical data, along with the raw images, were automatically saved as the calibration database files. With this new system, human recording errors would be eliminated. Verification experiments were performed by using the system to record the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN). NIMT's 100-kN deadweight force standard machine (DWM-100kN) was used to generate test forces. The experiments were set up in three categories: 1) dynamic condition (recording during load changes), 2) static condition (recording during fixed load), and 3) full calibration experiments in accordance with ISO 376:2011. The dynamic-condition experiment gave >94% of captured images without overlapping digits. The static-condition experiment gave >98% of images without overlapping. All measurement images without overlapping were translated into numbers by the developed program with 100% accuracy. The full calibration experiments also gave 100% accurate results. Moreover, in case of an incorrect translation of any result, it is possible to trace back to the raw calibration image to check and correct it. Therefore, this machine-vision-based system and program should be appropriate for recording force calibration data.
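The paper does not specify its OCR implementation, but the core idea of reading instrument digits can be illustrated with simple template matching: each digit is a small binary glyph, and an unknown glyph is assigned to the template with the fewest mismatching pixels. The 3x5 bitmaps below are hypothetical; a real system would first segment the display image into character cells:

```python
# Toy template-matching OCR for instrument-display digits.
# Templates are hypothetical 3x5 binary glyphs (a reduced digit set).
TEMPLATES = {
    "0": ["111", "101", "101", "101", "111"],
    "1": ["010", "110", "010", "010", "111"],
    "7": ["111", "001", "010", "010", "010"],
}

def mismatch(a, b):
    """Count of pixels where two glyphs disagree (Hamming distance)."""
    return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def read_glyph(glyph):
    """Return the digit whose template best matches the glyph."""
    return min(TEMPLATES, key=lambda d: mismatch(TEMPLATES[d], glyph))

noisy_seven = ["111", "001", "010", "110", "010"]  # a "7" with one flipped pixel
print(read_glyph(noisy_seven))  # 7
```

Keeping the raw image alongside the recognized number, as the authors do, is what makes misreads recoverable: any suspect value can be re-checked against its source image.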

  18. Computer vision challenges and technologies for agile manufacturing

    NASA Astrophysics Data System (ADS)

    Molley, Perry A.

    1996-02-01

Sandia National Laboratories, a Department of Energy laboratory, is responsible for maintaining the safety, security, reliability, and availability of the nuclear weapons stockpile for the United States. Because of the changing national and global political climates and inevitable budget cuts, Sandia is changing the methods and processes it has traditionally used in the product realization cycle for weapon components. Because of the increasing age of the nuclear stockpile, it is certain that the reliability of these weapons will degrade with time unless eventual action is taken to repair, requalify, or renew them. Furthermore, due to the downsizing of the DOE weapons production sites and loss of technical personnel, the new product realization process is being focused on developing and deploying advanced automation technologies in order to maintain the capability for producing new components. The goal of Sandia's technology development program is to create a product realization environment that is cost effective, with improved quality and reduced cycle time for small lot sizes. The new environment will rely less on the expertise of humans and more on intelligent systems and automation to perform the production processes. The systems will be robust in order to provide maximum flexibility and responsiveness for rapidly changing component or product mixes. An integrated enterprise will allow ready access to and use of information for effective and efficient product and process design. Concurrent engineering methods will allow a speedup of the product realization cycle, reduce costs, and dramatically lessen the dependency on creating and testing physical prototypes. Virtual manufacturing will allow production processes to be designed, integrated, and programmed off-line before a piece of hardware ever moves. The overriding goal is to be able to build a large variety of new weapons parts on short notice.
Many of these technologies that are being developed are also applicable to commercial production processes and applications. Computer vision will play a critical role in the new agile production environment for automation of processes such as inspection, assembly, welding, material dispensing and other process control tasks. Although there are many academic and commercial solutions that have been developed, none have had widespread adoption considering the huge potential number of applications that could benefit from this technology. The reason for this slow adoption is that the advantages of computer vision for automation can be a double-edged sword. The benefits can be lost if the vision system requires an inordinate amount of time for reprogramming by a skilled operator to account for different parts, changes in lighting conditions, background clutter, changes in optics, etc. Commercially available solutions typically require an operator to manually program the vision system with features used for the recognition. In a recent survey, we asked a number of commercial manufacturers and machine vision companies the question, 'What prevents machine vision systems from being more useful in factories?' The number one (and unanimous) response was that vision systems require too much skill to set up and program to be cost effective.

  19. Parallel Algorithms for Computer Vision

    DTIC Science & Technology

    1990-04-01

NA86-1, Thinking Machines Corporation, Cambridge, MA, December 1986. [43] J. Little, G. Blelloch, and T. Cass. How to program the Connection Machine for computer vision. In Proc. Workshop on Comp. Architecture for Pattern Analysis and Machine Intell., 1987. [91] J. Little, G. Blelloch, and T. Cass. ... In Proceedings of SPIE Conf. on Advances in Intelligent Robotics Systems, Bellingham, WA, 1987. SPIE.

  20. Quantum vision in three dimensions

    NASA Astrophysics Data System (ADS)

    Roth, Yehuda

We present four models for describing 3-D vision. Similar to the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than pure objective encryption. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of the selected states, as a tool for interpreting reality according to the observer's concepts. This is the basic concept of our study, and it is introduced in the first model. The other models suggest "softened" versions that might be much easier to implement. Our quantum interpretation approach contributes to the following fields. In technology, the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. In artificial intelligence, where the desire is to create a machine that exchanges information using human terminologies, our interpretation approach seems appropriate.

  1. Color image processing and vision system for an automated laser paint-stripping system

    NASA Astrophysics Data System (ADS)

    Hickey, John M., III; Hise, Lawson

    1994-10-01

Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system that could discriminate between substrates of various colors and textures, and paints ranging from semi-gloss grays to high-gloss red, white and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.

  2. Nondestructive evaluation of hardwood logs:CT scanning, machine vision and data utilization

    Treesearch

    Daniel L. Schmoldt; Luis G. Occena; A. Lynn Abbott; Nand K. Gupta

    1999-01-01

Sawing of hardwood logs still relies on relatively simple technologies that, in spite of their lack of sophistication, have been successful for many years due to wood's traditional low cost and ready availability. These characteristics of the hardwood resource have changed dramatically over the past 20 years, however, forcing wood processors to become more efficient in...

  3. Characterization of Defects in Lumber Using Color, Shape, and Density Information

    Treesearch

    B.H. Bond; D. Earl Kline; Philip A. Araman

    1998-01-01

    To help guide the development of multi-sensor machine vision systems for defect detection in lumber, a fundamental understanding of wood defects is needed. The purpose of this research was to advance the basic understanding of defects in lumber by describing them in terms of parameters that can be derived from color and x-ray scanning technologies and to demonstrate...

  4. Evaluation of a multi-sensor machine vision system for automated hardwood lumber grading

    Treesearch

    D. Earl Kline; Chris Surak; Philip A. Araman

    2000-01-01

    Over the last 10 years, scientists at the Thomas M. Brooks Forest Products Center, the Bradley Department of Electrical Engineering, and the USDA Forest Service have been working on lumber scanning systems that can accurately locate and identify defects in hardwood lumber. Current R&D efforts are targeted toward developing automated lumber grading technologies. The...

  5. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  6. Machine learning and computer vision approaches for phenotypic profiling

    PubMed Central

    Morris, Quaid

    2017-01-01

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. PMID:27940887

  7. Trends and developments in industrial machine vision: 2013

    NASA Astrophysics Data System (ADS)

    Niel, Kurt; Heinzl, Christoph

    2014-03-01

When following current advancements and implementations in the field of machine vision, there seem to be no limits to future developments: computing power constantly increases, new ideas are spreading, and previously challenging approaches are being introduced into the mass market. Within the past decades these advances have had dramatic impacts on our lives. Consumer electronics, e.g. computers or telephones, which once occupied large volumes, now fit in the palm of a hand. To note just a few examples: face recognition was adopted by the consumer market, 3D capturing became cheap, and thanks to a huge community, software development got easier using sophisticated platforms. However, there remains a gap between consumer and industrial applications: while the former have to be entertaining, the latter have to be reliable. Recent studies (e.g. VDMA [1], Germany) show a moderately increasing market for machine vision in industry. When industry is asked about its needs, the main challenges for industrial machine vision are simple usage, reliability for the process, quick support, full automation, self/easy adjustment to changing process parameters, "forget it in the line". A further big challenge is supporting quality control: nowadays the operator has to accurately define the tested features for checking the probes. There is also an upcoming development to let automated machine vision applications find out essential parameters at a more abstract level (top down). In this work we focus on three current and future topics for industrial machine vision: metrology supporting automation; quality control (inline/atline/offline); and visualization and analysis of datasets with steadily growing sizes. Finally, the general trend from pixel-oriented towards object-oriented evaluation is addressed. We do not directly address the field of robotics taking advantage of machine vision; this is a fast-changing area which is worth its own contribution.

  8. 75 FR 71146 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ...''); Amistar Automation, Inc. (``Amistar'') of San Marcos, California; Techno Soft Systemnics, Inc. (``Techno..., the ALJ's construction of the claim terms ``test,'' ``match score surface,'' and ``gradient direction...

  9. Electronic Nose and Electronic Tongue

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Nabarun; Bandhopadhyay, Rajib

Human beings have five senses, namely, vision, hearing, touch, smell and taste. Sensors for vision, hearing and touch have been under development for many years. The need for sensors capable of mimicking the senses of smell and taste has been felt only recently in the food industry, environmental monitoring and several industrial applications. In the ever-widening horizon of frontier research in the field of electronics and advanced computing, the emergence of the electronic nose (E-Nose) and electronic tongue (E-Tongue) has been drawing the attention of scientists and technologists for more than a decade. By intelligent integration of a multitude of technologies such as chemometrics, microelectronics and advanced soft computing, human olfaction has been successfully mimicked by new techniques called machine olfaction (Pearce et al. 2002). But the very essence of such research and development efforts has centered on the development of customized electronic nose and electronic tongue solutions specific to individual applications. In fact, research trends to date clearly point to the fact that a machine olfaction system as versatile, universal and broadband as the human nose and tongue may not be feasible in the decades to come. But application-specific solutions may definitely be demonstrated and commercialized by modulating sensor design and fine-tuning the soft computing solutions. This chapter deals with the theory and development of E-Nose and E-Tongue technology and their applications. A succinct account of future trends of R&D efforts in this field, with the objective of establishing a correlation between machine olfaction and human perception, is also included.

  10. Cohort: critical science

    NASA Astrophysics Data System (ADS)

    Digney, Bruce L.

    2007-04-01

Unmanned vehicle systems are an attractive technology for the military, but their promises have remained largely undelivered. There currently exist fielded remote-controlled UGVs and high-altitude UAVs whose benefits are based on standoff in low-complexity environments with sufficiently low control reaction time requirements to allow for teleoperation. While effective within their limited operational niche, such systems do not match the vision of future military UxV scenarios. Such scenarios envision unmanned vehicles operating effectively in complex environments and situations with high levels of independence and effective coordination with other machines and humans pursuing high-level, changing and sometimes conflicting goals. While these aims are clearly ambitious, they provide necessary targets and inspiration, with hopes of fielding near-term useful semi-autonomous unmanned systems. Autonomy involves many fields of research, including machine vision, artificial intelligence, control theory, machine learning and distributed systems, all of which are intertwined and share the goal of creating more versatile, broadly applicable algorithms. Cohort is a major Applied Research Program (ARP) led by Defence R&D Canada (DRDC) Suffield, and its aim is to develop coordinated teams of unmanned vehicles (UxVs) for urban environments. This paper discusses the critical science being addressed by DRDC in developing semi-autonomous systems.

  11. A Multiple Sensor Machine Vision System Technology for the Hardwood

    Treesearch

Richard W. Conners; D. Earl Kline; Philip A. Araman

    1995-01-01

    For the last few years the authors have been extolling the virtues of a multiple sensor approach to hardwood defect detection. Since 1989 the authors have actively been trying to develop such a system. This paper details some of the successes and failures that have been experienced to date. It also discusses what remains to be done and gives time lines for the...

  12. Machine Vision For Industrial Control:The Unsung Opportunity

    NASA Astrophysics Data System (ADS)

    Falkman, Gerald A.; Murray, Lawrence A.; Cooper, James E.

    1984-05-01

Vision modules have primarily been developed to relieve those pressures newly brought into existence by Inspection (QUALITY) and Robotic (PRODUCTIVITY) mandates. Industrial Control pressure stems, on the other hand, from the older first-industrial-revolution mandate of throughput. Satisfying such pressure calls for speed in both imaging and decision making. Vision companies have, however, put speed on a back burner or ignored it entirely because most modules are computer/software based, which limits their speed potential. Increasingly, the keynote being struck at machine vision seminars is that "Visual and Computational Speed Must Be Increased and Dramatically!" There are modular hardwired-logic systems that are fast but, all too often, they are not very bright. Such units measure the fill factor of bottles as they spin by, read labels on cans, count stacked plastic cups, or monitor the width of parts streaming past the camera. Many are only a bit more complex than a photodetector. Once in place, most of these units are incapable of simple upgrading to a new task and are Vision's analog to the robot industry's pick-and-place (RIA TYPE E) robot. Vision thus finds itself amidst the same quandaries that once beset the Robot Industry of America when it tried to define a robot, excluded dumb ones, and was left with only slow machines whose unit volume potential is shatteringly low. This paper develops an approach to meeting the need for a vision system that cuts a swath into the terra incognita of intelligent, high-speed vision processing. Main attention is directed to vision for industrial control. Some presently untapped vision application areas that will be serviced include: Electronics, Food, Sports, Pharmaceuticals, Machine Tools and Arc Welding.

  13. Calibrators measurement system for headlamp tester of motor vehicle base on machine vision

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe

    2014-09-01

With the development of photoelectric detection technology, machine vision is finding wider use in industry. The paper mainly introduces a calibrator measuring system for automotive lamp testers, of which a CCD image sampling system is the core. It presents the measuring principle for optical axis angle and light intensity, and proves the linear relationship between the calibrator's facula illuminance and the image-plane illuminance. The paper provides an important specification of the CCD imaging system. Image processing in MATLAB yields the flare's geometric midpoint and average gray level. By fitting the data with the method of least squares, a regression equation relating illuminance and gray level is obtained. The error of the experimental results of the measurement system is analyzed, and the combined standard uncertainty and the sources of optical axis angle error are given. The average measuring accuracy of the optical axis angle is controlled within 40''. The whole testing process uses digital means instead of manual judgment, giving higher accuracy and better repeatability than comparable measuring systems.
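The least-squares fit described in this record, regressing mean gray level on illuminance, can be sketched as follows. This is a minimal illustration using the closed-form normal equations for a line and hypothetical calibration pairs, not the paper's actual measurements.

```python
def least_squares_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares (closed form)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical calibration pairs: (illuminance in lux, mean gray level)
E = [10.0, 20.0, 30.0, 40.0, 50.0]
g = [25.0, 45.0, 65.0, 85.0, 105.0]  # synthetic, exactly linear
a, b = least_squares_line(E, g)      # regression: gray = a*E + b
```

In practice the gray levels would come from the MATLAB image-processing step; the regression equation then lets the tester infer illuminance from measured gray level.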

  14. Machine vision system for online inspection of freshly slaughtered chickens

    USDA-ARS?s Scientific Manuscript database

    A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...

  15. Embedded Systems and TensorFlow Frameworks as Assistive Technology Solutions.

    PubMed

    Mulfari, Davide; Palla, Alessandro; Fanucci, Luca

    2017-01-01

This paper presents the design of a deep-learning-based wearable computer vision system for visually impaired users. The assistive technology solution exploits a powerful single-board computer and smart glasses with a camera to allow the user to explore the objects within the surrounding environment, while it employs the Google TensorFlow machine learning framework to classify the acquired stills in real time. The proposed aid can therefore increase awareness of the explored environment, and it interacts with its user by means of audio messages.

  16. Automation of the CCTV-mediated detection of individuals illegally carrying firearms: combining psychological and technological approaches

    NASA Astrophysics Data System (ADS)

    Darker, Iain T.; Kuo, Paul; Yang, Ming Yuan; Blechko, Anastassia; Grecos, Christos; Makris, Dimitrios; Nebel, Jean-Christophe; Gale, Alastair G.

    2009-05-01

Findings from the current UK national research programme, MEDUSA (Multi Environment Deployable Universal Software Application), are presented. MEDUSA brings together two approaches to facilitate the design of an automatic, CCTV-based firearm detection system: psychological, to elicit strategies used by CCTV operators; and machine vision, to identify key cues derived from camera imagery. Potentially effective human- and machine-based strategies have been identified; these will form elements of the final system. The efficacy of these algorithms in discriminating between firearms and matched distractor objects has been tested on staged CCTV footage. Early results indicate the potential of this combined approach.

  17. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  18. Operator-coached machine vision for space telerobotics

    NASA Technical Reports Server (NTRS)

    Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.

    1991-01-01

A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system which would demonstrate the feasibility of highly interactive operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
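The edge data mentioned in this record come from a modified Sobel operator; a minimal sketch of the standard 3x3 Sobel gradient magnitude (the authors' modification is not detailed in the abstract) looks like this:

```python
def sobel_magnitude(img):
    """Gradient magnitude of a grayscale image (list of rows) via the
    standard 3x3 Sobel kernels; border pixels are left at zero."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Edges found this way would then serve as the data to which the operator-designated wire-frame edges are fit by least squares.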

  19. Computer Vision Malaria Diagnostic Systems-Progress and Prospects.

    PubMed

    Pollak, Joseph Joel; Houri-Yafin, Arnon; Salpeter, Seth J

    2017-01-01

    Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.

  20. Machine vision and appearance based learning

    NASA Astrophysics Data System (ADS)

    Bernstein, Alexander

    2017-03-01

Smart algorithms are used in machine vision to organize and extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a visual sensing system and belonging to an appearance space, is only a key first step in solving various specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance-space analysis. This problem is reduced to a regression-on-an-appearance-manifold problem, and newly developed regression-on-manifolds methods are used for its solution.

  1. Being human in a global age of technology.

    PubMed

    Whelton, Beverly J B

    2016-01-01

This philosophical enquiry considers the impact of a global world view and technology on the meaning of being human. The global vision increases our awareness of the common bond between all humans, while technology tends to separate us from an understanding of ourselves as human persons. We review some advances in connecting as a community within our world, and many examples of technological change. This review is not exhaustive. The focus is to understand enough changes to think through the possibility of healthcare professionals becoming cyborgs, human-machine units that are subsequently neither human nor machine. It is seen that human-technology interfaces are a different way of interacting but do not change what it is to be human in our rational capacities for meaningful speech and freely chosen action. In the highly technical environment of the ICU, expert nurses work in harmony with both the technical equipment and the patient. We used Heidegger to consider the nature of equipment, and Descartes to explore unique human capacities. Aristotle, Wallace, Sokolowski, and Clarke provide a summary of humanity as substantial and relational. © 2015 John Wiley & Sons Ltd.

  2. Investigation into the use of smartphone as a machine vision device for engineering metrology and flaw detection, with focus on drilling

    NASA Astrophysics Data System (ADS)

    Razdan, Vikram; Bateman, Richard

    2015-05-01

This study investigates the use of a smartphone and its camera vision capabilities in engineering metrology and flaw detection, with a view to developing a low-cost alternative to machine vision systems, which are out of reach for small-scale manufacturers. A smartphone has to provide a similar level of accuracy to machine vision devices like smart cameras. The objective set out was to develop an app on an Android smartphone, incorporating advanced computer vision algorithms written in Java. The app could then be used for recording measurements of twist drill bits and hole geometry, and analysing the results for accuracy. A detailed literature review was carried out for in-depth study of machine vision systems and their capabilities, including a comparison between the HTC One X Android smartphone and the Teledyne Dalsa BOA smart camera. A review of the existing metrology apps on the market was also undertaken. In addition, the drilling operation was evaluated to establish key measurement parameters of a twist drill bit, especially flank wear and diameter. The methodology covers software development of the Android app, including the use of image processing algorithms like Gaussian blur, Sobel and Canny available from the OpenCV software library, as well as designing and developing the experimental set-up for carrying out the measurements. The results obtained from the experimental set-up were analysed for the geometry of twist drill bits and holes, including diametrical measurements and flaw detection. The results show that smartphones like the HTC One X have the processing power and the camera capability to carry out metrological tasks, although the dimensional accuracy achievable from the smartphone app is below the level provided by machine vision devices like smart cameras.
A Smartphone with mechanical attachments, capable of image processing and having a reasonable level of accuracy in dimensional measurement, has the potential to become a handy low-cost Machine vision system for small scale manufacturers, especially in field metrology and flaw detection.
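One of the simplest measurements in a setup like the one this record describes, a diameter estimate from a calibrated image, can be sketched as below: threshold one grayscale scanline and convert the pixel width to millimetres via a known scale. The threshold and the pixels-per-millimetre figure are illustrative assumptions, not values from the study.

```python
def width_from_scanline(row, threshold=128):
    """Width in pixels of a dark object on a bright background,
    measured along one grayscale scanline."""
    idx = [i for i, v in enumerate(row) if v < threshold]
    return (idx[-1] - idx[0] + 1) if idx else 0

def pixels_to_mm(width_px, scale_px_per_mm):
    """Convert a pixel width to millimetres using a calibration scale."""
    return width_px / scale_px_per_mm

# Synthetic scanline: 40 dark object pixels between bright background
row = [255] * 10 + [30] * 40 + [255] * 10
w_px = width_from_scanline(row)     # object width in pixels
w_mm = pixels_to_mm(w_px, 8.0)      # assumed calibration of 8 px/mm
```

A real app would first clean the image (e.g. Gaussian blur and Canny edges, as the study does with OpenCV) and average many scanlines; the calibration scale would come from imaging an object of known size.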

  3. Automatic detection and counting of cattle in UAV imagery based on machine vision technology (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rahnemoonfar, Maryam; Foster, Jamie; Starek, Michael J.

    2017-05-01

Beef production is the main agricultural industry in Texas, and livestock are managed on pasture and rangeland that are usually huge in size and not easily accessible by vehicles. The current method for livestock location identification and counting is visual observation, which is very time consuming and costly. For animals on large tracts of land, manned aircraft may be necessary to count animals, which is noisy, disturbs the animals, and may introduce a source of error into counts. Such manual approaches are expensive, slow and labor intensive. In this paper we study the combination of a small unmanned aerial vehicle (sUAV) and machine vision technology as a valuable alternative to manual animal surveying. A fixed-wing UAV fitted with GPS and a digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were covered in four UAS flights, and individual photographs were used to develop orthomosaic imagery. To detect animals in the UAV imagery, a fully automatic technique was developed based on the spatial and spectral characteristics of objects. This automatic technique can even detect small animals that are partially occluded by bushes. Experimental results, compared against ground truth, show the effectiveness of our algorithm.
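The counting step in a detector of this kind, after spectrally distinctive pixels have been thresholded into a binary mask, amounts to counting connected components. The sketch below is an illustrative stand-in for the authors' spatial/spectral technique, whose exact rules the abstract does not give.

```python
def count_blobs(mask):
    """Count 4-connected components of truthy cells in a binary mask
    (list of rows); each component is one candidate animal."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1
                stack = [(y, x)]          # iterative flood fill
                while stack:
                    cy, cx = stack.pop()
                    if (cy < 0 or cy >= h or cx < 0 or cx >= w
                            or seen[cy][cx] or not mask[cy][cx]):
                        continue
                    seen[cy][cx] = True
                    stack += [(cy + 1, cx), (cy - 1, cx),
                              (cy, cx + 1), (cy, cx - 1)]
    return count
```

In a full pipeline one would also filter components by area so that noise pixels and objects far larger than an animal are discarded.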

  4. Square tracking sensor for autonomous helicopter hover stabilization

    NASA Astrophysics Data System (ADS)

    Oertel, Carl-Henrik

    1995-06-01

    Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a ground fixed or moving target. As a proof of concept for a general synthetic vision solution a restricted machine vision system, which is capable of locating and tracking a special target, was developed by the Institute of Flight Mechanics of Deutsche Forschungsanstalt fur Luft- und Raumfahrt e.V. (i.e., German Aerospace Research Establishment). This sensor, which is specialized to detect and track a square, was integrated in the fly-by-wire helicopter ATTHeS (i.e., Advanced Technology Testing Helicopter System). An existing model following controller for the forward flight condition was adapted for the hover and low speed requirements of the flight vehicle. The special target, a black square with a length of one meter, was mounted on top of a car. Flight tests demonstrated the automatic stabilization of the helicopter above the moving car by synthetic vision.

  5. Vision-based aircraft guidance

    NASA Technical Reports Server (NTRS)

    Menon, P. K.

    1993-01-01

Early research on the development of machine vision algorithms to serve as pilot aids in aircraft flight operations is discussed. The research is useful for synthesizing new cockpit instrumentation that can enhance flight safety and efficiency. With the present work as the basis, future research will produce a low-cost instrument by integrating a conventional TV camera with off-the-shelf digitizing hardware for flight-test verification. The initial focus of the research will be on developing pilot aids for clear-night operations. The latter part of the research will examine synthetic vision issues for poor-visibility flight operations. Both research efforts will contribute towards the high-speed civil transport aircraft program. It is anticipated that the research reported here will also produce pilot aids for conducting helicopter flight operations during emergency search and rescue. The primary emphasis of the present research effort is on near-term, flight-demonstrable technologies. This report discusses pilot aids for night landing and takeoff, and synthetic vision as an aid to low-visibility landing.

  6. 75 FR 60478 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... Automation, Inc. (``Amistar'') of San Marcos, California; Techno Soft Systemnics, Inc. (``Techno Soft'') of... the claim terms ``test,'' ``match score surface,'' and ``gradient direction,'' all of his infringement... complainants' proposed construction for the claim terms ``test,'' ``match score surface,'' and ``gradient...

  7. The precision measurement and assembly for miniature parts based on double machine vision systems

    NASA Astrophysics Data System (ADS)

    Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.

    2015-02-01

    In the process of miniature parts' assembly, the structural features on the bottom or side of the parts often need to be aligned and positioned. The general assembly equipment integrated with one vertical downward machine vision system cannot satisfy the requirement. A precision automatic assembly equipment was developed with double machine vision systems integrated. In the system, a horizontal vision system is employed to measure the position of the feature structure at the parts' side view, which cannot be seen with the vertical one. The position measured by horizontal camera is converted to the vertical vision system with the calibration information. By careful calibration, the parts' alignment and positioning in the assembly process can be guaranteed. The developed assembly equipment has the characteristics of easy implementation, modularization and high cost performance. The handling of the miniature parts and assembly procedure were briefly introduced. The calibration procedure was given and the assembly error was analyzed for compensation.
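The conversion this record describes, mapping a feature position measured by the horizontal camera into the vertical camera's frame using calibration information, can be sketched as a planar rigid transform. This is a simplified illustration; a real calibration would also account for scale and lens distortion, and the angle and offsets below are hypothetical.

```python
import math

def apply_calibration(point, theta, tx, ty):
    """Map a 2-D point from the horizontal camera frame into the
    vertical camera frame via rotation by theta plus translation."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y + tx, s * x + c * y + ty)

# Hypothetical calibration: frames differ by a 90-degree rotation
vx, vy = apply_calibration((1.0, 0.0), math.pi / 2, 0.0, 0.0)
```

The transform parameters would be estimated once, during the careful calibration step the authors describe, and then reused for every part positioned during assembly.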

  8. A Practical Solution Using A New Approach To Robot Vision

    NASA Astrophysics Data System (ADS)

    Hudson, David L.

    1984-01-01

Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens to robot vision more industrial applications that were previously impractical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others.
The user then had to assemble the pieces, and in most instances he had to write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.

  9. Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation.

    PubMed

    Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E

    2017-08-04

Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, using a pretrained convolutional neural network (CNN) developed for machine vision purposes for an exercise classification effort. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. TensorFlow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), was collected.
The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature extraction and classification using random forest classifiers. With the collected dataset and the proposed method, the different exercises could be recognized with a 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. ©Jose Juan Dominguez Veiga, Martin O'Reilly, Darragh Whelan, Brian Caulfield, Tomas E Ward. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 04.08.2017.
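The core preprocessing step in this approach, turning a raw sensor signal into an image before handing it to a pretrained vision CNN, can be sketched as a tiny rasterizer. This is an illustrative reduction of the plot-generation step, not the authors' actual plotting pipeline, and the raster dimensions are arbitrary.

```python
def series_to_raster(values, height=16, width=None):
    """Rasterize a 1-D signal into a binary image (list of rows,
    row 0 at the top), mimicking the plot-as-image preprocessing
    step used before CNN retraining."""
    width = width or len(values)
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0              # avoid division by zero
    img = [[0] * width for _ in range(height)]
    for x in range(width):
        v = values[int(x * len(values) / width)]   # nearest sample
        y = int((v - lo) / span * (height - 1))    # scale to rows
        img[height - 1 - y][x] = 1
    return img
```

In the published method, such images (one per labeled exercise repetition) are sorted into folders by class and used to retrain the final layers of Inception via TensorFlow's transfer-learning workflow.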

  10. Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation

    PubMed Central

    O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E

    2017-01-01

Background Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. Objective The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, using a pretrained convolutional neural network (CNN) developed for machine vision purposes for an exercise classification effort. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. Methods We applied a CNN, an established machine vision technique, to the task of ED. TensorFlow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks.
Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature extraction and classification with random forest classifiers. Results With the collected dataset and the proposed method, the different exercises could be recognized with 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. Conclusions The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. PMID:28778851
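    The core of the pipeline described above, turning raw sensor signals into plot images that a pretrained CNN can classify, can be sketched in code. The following is a minimal, hypothetical rasterization step (names and sizes are illustrative, not from the paper), standing in for saving a matplotlib plot to disk:

```python
import numpy as np

def signal_to_image(signal, height=32, width=64):
    """Rasterize a 1-D sensor signal into a binary image.

    A stand-in for saving a time-series plot as an image file: each
    column of the image marks the (normalized) signal value at that
    time step.
    """
    signal = np.asarray(signal, dtype=float)
    # Resample to the image width by linear interpolation.
    xs = np.linspace(0, len(signal) - 1, width)
    resampled = np.interp(xs, np.arange(len(signal)), signal)
    # Normalize to [0, 1]; guard against a flat signal.
    span = resampled.max() - resampled.min()
    norm = (resampled - resampled.min()) / span if span > 0 else np.zeros(width)
    # Map each value to a pixel row (row 0 = top of the image).
    rows = ((1.0 - norm) * (height - 1)).astype(int)
    img = np.zeros((height, width), dtype=np.uint8)
    img[rows, np.arange(width)] = 1
    return img

# Example: one simulated accelerometer axis for a repetitive exercise.
t = np.linspace(0, 4 * np.pi, 200)
img = signal_to_image(np.sin(t))
```

    Each such image, filed in a folder named for its exercise label, would then feed a standard image-retraining workflow such as the Inception retraining described above.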

  11. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  12. Robot path planning using expert systems and machine vision

    NASA Astrophysics Data System (ADS)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret them with a knowledge-based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace, and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  13. Automation and robotics for Space Station in the twenty-first century

    NASA Technical Reports Server (NTRS)

    Willshire, K. F.; Pivirotto, D. L.

    1986-01-01

    Space Station telerobotics will evolve beyond the initial capability into a smarter and more capable system as we enter the twenty-first century. Current technology programs including several proposed ground and flight experiments to enable development of this system are described. Advancements in the areas of machine vision, smart sensors, advanced control architecture, manipulator joint design, end effector design, and artificial intelligence will provide increasingly more autonomous telerobotic systems.

  14. Applications of artificial neural networks (ANNs) in food science.

    PubMed

    Huang, Yiqun; Kangas, Lars J; Rasco, Barbara A

    2007-01-01

    Artificial neural networks (ANNs) have been applied in almost every aspect of food science over the past two decades, although most applications are in the development stage. ANNs are useful tools for food safety and quality analyses, which include modeling of microbial growth and from this predicting food safety, interpreting spectroscopic data, and predicting physical, chemical, functional and sensory properties of various food products during processing and distribution. ANNs hold a great deal of promise for modeling complex tasks in process control and simulation and in applications of machine perception including machine vision and electronic nose for food safety and quality control. This review discusses the basic theory of the ANN technology and its applications in food science, providing food scientists and the research community an overview of the current research and future trend of the applications of ANN technology in the field.

  15. Fast 3D NIR systems for facial measurement and lip-reading

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther

    2017-05-01

    Structured-light projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. In particular, there is a great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, or entertainment. New developments of facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interactions. In such cases, 3D information may offer more detailed information than 2D images which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different methods of near-infrared projection technologies in combination with a stereo vision setup of two cameras. We explain the optical principles of an NIR GOBO projector, an array projector and a modified multi-aperture projection method and compare their performance parameters to each other. Further, we show some experimental measurement results of applications where we realized fast, accurate, and irritation-free measurements of human faces.

  16. The use of holographic and diffractive optics for optimized machine vision illumination for critical dimension inspection

    NASA Astrophysics Data System (ADS)

    Lizotte, Todd E.; Ohar, Orest

    2004-02-01

    Illuminators used in machine vision applications typically produce non-uniform illumination on the targeted surface being observed, causing a variety of problems with machine vision alignment or measurement. In most circumstances the light source is broad spectrum, leading to further problems with image quality when viewed through a CCD camera. Configured with a simple light bulb and a mirrored reflector and/or frosted glass plates, these general illuminators are appropriate only for macro applications. Over the last 5 years, newer illuminators have hit the market, including circular or rectangular arrays of high-intensity light emitting diodes. These diode arrays are used to create monochromatic flood illumination of a surface that is to be inspected. The problem with these illumination techniques is that most of the light does not illuminate the desired areas but broadly spreads across the surface, or, when integrated with diffuser elements, tends to create shadowing effects similar to those of broad spectrum light sources. In many cases a user will try to increase the performance of these illuminators by adding several of these assemblies together, increasing the intensity, or by moving the illumination source closer to or farther from the surface being inspected. These non-uniform techniques can lead to machine vision errors, where the computer vision system may read false information, such as interpreting non-uniform lighting or shadowing effects as defects. This paper covers a technique involving the use of holographic/diffractive hybrid optical elements that are integrated into standard and customized light sources used in the machine vision industry. The bulk of the paper describes the function and fabrication of the holographic/diffractive optics and how they can be tailored to improve illuminator design. Finally, a specific design is presented and examples of it in operation are disclosed.
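    On the software side, the false-defect problem described above is often mitigated by flat-field correction, dividing each image by a reference image of a blank target under the same lighting. This is a standard technique offered here for context, not the paper's optical approach; the lighting profile and values below are synthetic:

```python
import numpy as np

def flat_field_correct(image, flat, eps=1e-6):
    """Divide out an illumination profile captured from a blank target.

    `flat` is an image of a uniform reference surface under the same
    lighting; dividing by it (normalized to unit mean) removes the
    fixed illumination pattern so thresholds apply uniformly.
    """
    gain = flat / (flat.mean() + eps)
    return image / np.maximum(gain, eps)

# Synthetic example: a uniform surface of value 100 under a lighting
# profile that falls off toward the edges of the field of view.
x = np.linspace(-1, 1, 64)
profile = 1.0 - 0.4 * x**2             # brighter in the middle
surface = np.full(64, 100.0)
observed = surface * profile           # what the camera records
corrected = flat_field_correct(observed, flat=profile.copy())
```

    After correction the recorded values are uniform again, so a fixed defect threshold no longer flags the dimmer edges.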

  17. Cameras Reveal Elements in the Short Wave Infrared

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Goodrich ISR Systems Inc. (formerly Sensors Unlimited Inc.), based out of Princeton, New Jersey, received Small Business Innovation Research (SBIR) contracts from the Jet Propulsion Laboratory, Marshall Space Flight Center, Kennedy Space Center, Goddard Space Flight Center, Ames Research Center, Stennis Space Center, and Langley Research Center to assist in advancing and refining indium gallium arsenide imaging technology. Used on the Lunar Crater Observation and Sensing Satellite (LCROSS) mission in 2009 for imaging the short wave infrared wavelengths, the technology has dozens of applications in military, security and surveillance, machine vision, medical, spectroscopy, semiconductor inspection, instrumentation, thermography, and telecommunications.

  18. Machine vision for various manipulation tasks

    NASA Astrophysics Data System (ADS)

    Domae, Yukiyasu

    2017-03-01

    Bin picking, re-grasping, pick-and-place, kitting: many such manipulation tasks arise in the automation of factories, warehouses, and similar settings. The main obstacle to automation is that the target objects (items/parts) vary widely in shape, weight, and surface material. In this talk, I will present the latest machine vision systems and algorithms that address this problem.

  19. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Treesearch

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  20. Theory research of seam recognition and welding torch pose control based on machine vision

    NASA Astrophysics Data System (ADS)

    Long, Qiang; Zhai, Peng; Liu, Miao; He, Kai; Wang, Chunyang

    2017-03-01

    As the automation requirements of welding become more demanding, this paper proposes a method for extracting welding information with a vision sensor, and simulations with MATLAB have been conducted. In addition, in order to improve the quality of automatic robot welding, an information retrieval method for welding torch pose control by visual sensor is attempted. Considering the demands of welding technology and engineering practice, the relevant coordinate systems and variables are strictly defined, a mathematical model of the welding torch pose is established, and its feasibility is verified by MATLAB simulation. This work lays a foundation for the development of a high-precision, high-quality off-line welding programming system.
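    As a worked illustration of the kind of pose model described above, a torch orientation can be represented as a rotation matrix composed from travel and work angles. The frame definitions and composition order below are assumptions for illustration, not the paper's actual definitions:

```python
import numpy as np

def torch_pose(travel_deg, work_deg):
    """Rotation matrix for a hypothetical welding-torch orientation.

    Illustrative frame convention: the torch starts aligned with the
    seam normal, then tilts by a travel angle about the seam axis X
    and a work angle about Y, composed as R = Ry(work) @ Rx(travel).
    """
    a, b = np.radians(travel_deg), np.radians(work_deg)
    rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a), np.cos(a)]])
    ry = np.array([[np.cos(b), 0, np.sin(b)],
                   [0, 1, 0],
                   [-np.sin(b), 0, np.cos(b)]])
    return ry @ rx

R = torch_pose(15.0, 10.0)
```

    Any valid pose matrix built this way is orthonormal with unit determinant, which is a useful sanity check when composing coordinate frames.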

  1. Hybrid vision activities at NASA Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.

  2. Responding to the Event Deluge

    NASA Technical Reports Server (NTRS)

    Williams, Roy D.; Barthelmy, Scott D.; Denny, Robert B.; Graham, Matthew J.; Swinbank, John

    2012-01-01

    We present the VOEventNet infrastructure for large-scale rapid follow-up of astronomical events, including selection, annotation, machine intelligence, and coordination of observations. The VOEvent standard is central to this vision, with distributed and replicated services rather than centralized facilities. We also describe some of the event brokers, services, and software that are connected to the network. These technologies will become more important in the coming years, with new event streams from Gaia, LOFAR, LIGO, LSST, and many others.

  3. Clinical use and misuse of automated semen analysis.

    PubMed

    Sherins, R J

    1991-01-01

    During the past six years, there has been an explosion of technology which allows automated machine-vision for sperm analysis. CASA clearly provides an opportunity for objective, systematic assessment of sperm motion. But there are many caveats in using this type of equipment. CASA requires a disciplined and standardized approach to semen collection, specimen preparation, machine settings, calibration and avoidance of sampling bias. Potential sources of error can be minimized. Unfortunately, the rapid commercialization of this technology preceded detailed statistical analysis of such data to allow equally rapid comparisons of data between different CASA machines and among different laboratories. Thus, it is now imperative that we standardize use of this technology and obtain more detailed biological insights into sperm motion parameters in semen and after capacitation before we empirically employ CASA for studies of fertility prediction. In the basic science arena, CASA technology will likely evolve to provide new algorithms for accurate sperm motion analysis and give us an opportunity to address the biophysics of sperm movement. In the clinical arena, CASA instruments provide the opportunity to share and compare sperm motion data among laboratories by virtue of its objectivity, assuming standardized conditions of utilization. Identification of men with specific sperm motion disorders is certain, but the biological relevance of motility dysfunction to actual fertilization remains uncertain and surely the subject for further study.

  4. Receptive fields and the theory of discriminant operators

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Hungenahally, Suresh K.

    1991-02-01

    The biological basis of vision is a notion used extensively in the development of machine vision systems for various applications. In this paper we attempt to emulate the receptive fields that exist in the biological visual channels. In particular, we exploit the notion of receptive fields to develop mathematical functions, named discriminant functions, for the extraction of transition information from signals, multi-dimensional signals, and images. These functions are found to be useful for the development of artificial receptive fields for neuro-vision systems.

  5. Deep Learning for Computer Vision: A Brief Review

    PubMed Central

    Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios

    2018-01-01

    Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619

  6. Progress in computer vision.

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.
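    Point (2) above, the combination of classifiers, can be illustrated with the simplest fusion rule, a plurality vote over per-classifier predictions (a generic sketch, not the authors' specific scheme):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by plurality vote.

    `predictions` is a list of equal-length label sequences, one per
    classifier; ties go to the label encountered first among the tied
    candidates.
    """
    combined = []
    for labels in zip(*predictions):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three hypothetical classifiers disagreeing on five samples.
clf_a = ["cat", "dog", "dog", "cat", "dog"]
clf_b = ["cat", "cat", "dog", "cat", "cat"]
clf_c = ["dog", "dog", "dog", "cat", "dog"]
fused = majority_vote([clf_a, clf_b, clf_c])  # ["cat", "dog", "dog", "cat", "dog"]
```

    Even this naive rule tends to outperform any single weak classifier when their errors are not strongly correlated, which is the intuition behind classifier combination.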

  7. Vision Systems with the Human in the Loop

    NASA Astrophysics Data System (ADS)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.

  8. Detection of oranges from a color image of an orange tree

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Gallagher, A.; Eriksson, J.

    1999-10-01

    The progress of robotic and machine vision technology has increased the demand for sophisticated methods for performing automatic harvesting of fruit. The harvesting of fruit, until recently, has been performed manually and is quite labor intensive. An automatic robot harvesting system that uses machine vision to locate and extract the fruit would free the agricultural industry from the ups and downs of the labor market. The environment in which robotic fruit harvesters must work presents many challenges due to the inherent variability from one location to the next. This paper takes a step towards this goal by outlining a machine vision algorithm that detects and accurately locates oranges from a color image of an orange tree. Previous work in this area has focused on differentiating the orange regions from the rest of the picture and not locating the actual oranges themselves. Failure to locate the oranges, however, leads to a reduced number of successful pick attempts. This paper presents a new approach for orange region segmentation in which the circumference of the individual oranges as well as partially occluded oranges are located. Accurately defining the circumference of each orange allows a robotic harvester to cut the stem of the orange by either scanning the top of the orange with a laser or by directing a robotic arm towards the stem to automatically cut it. A modified version of the K-means algorithm is used to initially segment the oranges from the canopy of the orange tree. Morphological processing is then used to locate occluded oranges and an iterative circle finding algorithm is used to define the circumference of the segmented oranges.
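    The initial segmentation step can be sketched with a plain K-means clustering of RGB pixels. The toy colors below are hypothetical stand-ins for fruit and canopy, and the farthest-point initialization is a simplification chosen to keep the sketch deterministic (the paper uses a modified K-means whose details are not given here):

```python
import numpy as np

def kmeans(pixels, k=2, iters=20):
    """Plain k-means on an (N, 3) array of RGB pixels.

    Uses deterministic farthest-point initialization: start from the
    first pixel, then repeatedly pick the pixel farthest from all
    chosen centers.
    """
    centers = [pixels[0].astype(float)]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[dists.argmax()].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then update centers.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Toy "image": orange-ish fruit pixels against green-ish canopy pixels.
orange = np.array([[220.0, 120.0, 30.0]] * 50) + (np.arange(50) % 5)[:, None]
green = np.array([[40.0, 140.0, 40.0]] * 50) + (np.arange(50) % 5)[:, None]
pixels = np.vstack([orange, green])
labels, centers = kmeans(pixels, k=2)
```

    The cluster whose center has a high red channel corresponds to candidate fruit regions, which the paper then refines with morphology and circle fitting.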

  9. Anomalous Cases of Astronaut Helmet Detection

    NASA Technical Reports Server (NTRS)

    Dolph, Chester; Moore, Andrew J.; Schubert, Matthew; Woodell, Glenn

    2015-01-01

    An astronaut's helmet is an invariant, rigid image element that is well suited for identification and tracking using current machine vision technology. Future space exploration will benefit from the development of astronaut detection software for search and rescue missions based on EVA helmet identification. However, helmets are solid white, except for metal brackets to attach accessories such as supplementary lights. We compared the performance of a widely used machine vision pipeline on a standard-issue NASA helmet with and without affixed experimental feature-rich patterns. Performance on the patterned helmet was far more robust. We found that four different feature-rich patterns are sufficient to identify a helmet and determine orientation as it is rotated about the yaw, pitch, and roll axes. During helmet rotation the field of view changes to frames containing parts of two or more feature-rich patterns. We took reference images in these locations to fill in detection gaps. These multiple feature-rich pattern references added substantial benefit to detection; however, they generated the majority of the anomalous cases. In these few instances, our algorithm keys in on one feature-rich pattern of a multiple-pattern reference and makes an incorrect prediction of the location of the other feature-rich patterns. We describe and make recommendations on ways to mitigate anomalous cases in which detection of one or more feature-rich patterns fails. While the number of cases is only a small percentage of the tested helmet orientations, they illustrate important design considerations for future spacesuits. In addition to our four successful feature-rich patterns, we present unsuccessful patterns and discuss the cause of their poor performance from a machine vision perspective. Future helmets designed with these considerations will enable automated astronaut detection and thereby enhance mission operations and extraterrestrial search and rescue.

  10. Development of a cost-effective machine vision system for in-field sorting and grading of apples: Fruit orientation and size estimation

    USDA-ARS?s Scientific Manuscript database

    The objective of this research was to develop an in-field apple presorting and grading system to separate undersized and defective fruit from fresh market-grade apples. To achieve this goal, a cost-effective machine vision inspection prototype was built, which consisted of a low-cost color camera, L...

  11. Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon

    1997-04-01

    A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines an Annealing Cellular Neural Network (ACNN) and a Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected to their neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. A compact current-mode VLSI design feasibility of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM based avionics architecture for NASA's New Millennium Program is also described.

  12. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

    In his 1960 paper "Man-Computer Symbiosis," Licklider predicted that human brains and computing machines would be coupled in a tight partnership that would think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today. Today we are on the threshold of resurrecting the vision of symbiosis. While Licklider's original vision suggested a co-equal relationship, here we discuss an updated vision, neo-symbiosis, in which the human holds a superordinate position in an intelligent human-computer collaborative environment. This paper was originally published as a journal article and is being published as a chapter in an upcoming book series, Advances in Novel Approaches in Cognitive Informatics and Natural Intelligence.

  13. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.

  14. Manifold learning in machine vision and robotics

    NASA Astrophysics Data System (ADS)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable" data occupy only a very small part of the high-dimensional observation space and have smaller intrinsic dimensionality. The generally accepted model of such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from natural sources meet this model as a rule. The use of manifold learning techniques in machine vision and robotics, which discover the low-dimensional structure of high-dimensional data and result in effective algorithms for solving a large number of subject-oriented tasks, is the content of the conference plenary speech, some topics of which are covered in this paper.
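    The manifold model is easiest to see in its linear special case: data generated from a low-dimensional latent space and embedded in a high-dimensional observation space reveal their intrinsic dimensionality through the PCA spectrum. The following synthetic sketch uses PCA as a stand-in for a full nonlinear manifold learner:

```python
import numpy as np

rng = np.random.default_rng(42)

# 500 points on a 2-D plane embedded in a 10-D "observation space",
# plus small noise -- the manifold model in its simplest (linear) form.
latent = rng.normal(size=(500, 2))      # intrinsic coordinates
embedding = rng.normal(size=(2, 10))    # linear map into observation space
data = latent @ embedding + 0.01 * rng.normal(size=(500, 10))

# PCA via SVD of the centered data: the singular-value spectrum
# reveals how many directions carry almost all of the variance.
centered = data - data.mean(axis=0)
sing = np.linalg.svd(centered, compute_uv=False)
explained = sing**2 / np.sum(sing**2)
intrinsic_dim = int(np.sum(np.cumsum(explained) < 0.99)) + 1
```

    Nonlinear manifold learners (Isomap, LLE, autoencoders) generalize the same idea to curved surfaces, where the PCA spectrum alone would overestimate the intrinsic dimension.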

  15. Machine Vision Applied to Navigation of Confined Spaces

    NASA Technical Reports Server (NTRS)

    Briscoe, Jeri M.; Broderick, David J.; Howard, Ricky; Corder, Eric L.

    2004-01-01

    The reliability of space related assets has been emphasized after the second loss of a Space Shuttle. The intricate nature of the hardware being inspected often requires a complete disassembly to perform a thorough inspection which can be difficult as well as costly. Furthermore, it is imperative that the hardware under inspection not be altered in any other manner than that which is intended. In these cases the use of machine vision can allow for inspection with greater frequency using less intrusive methods. Such systems can provide feedback to guide, not only manually controlled instrumentation, but autonomous robotic platforms as well. This paper serves to detail a method using machine vision to provide such sensing capabilities in a compact package. A single camera is used in conjunction with a projected reference grid to ascertain precise distance measurements. The design of the sensor focuses on the use of conventional components in an unconventional manner with the goal of providing a solution for systems that do not require or cannot accommodate more complex vision systems.
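    The distance-from-projected-grid idea rests on standard structured-light triangulation: a projector offset from the camera makes each grid point shift across the image as depth changes. A minimal sketch with illustrative numbers (not the sensor's actual calibration):

```python
def depth_from_shift(pixel_shift, focal_px, baseline_m):
    """Triangulated depth for one projected grid point.

    Standard structured-light geometry with hypothetical parameters:
    a projector offset from the camera by `baseline_m` meters shifts
    each grid point by `pixel_shift` pixels as depth changes, and
    depth follows the triangulation relation Z = f * b / shift.
    """
    if pixel_shift <= 0:
        raise ValueError("pixel shift must be positive")
    return focal_px * baseline_m / pixel_shift

# A 10 px shift with an 800 px focal length and a 5 cm baseline.
z = depth_from_shift(10.0, focal_px=800.0, baseline_m=0.05)
```

    Because depth varies inversely with the observed shift, precision is best at close range, which suits the confined-space inspection scenario described above.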

  16. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
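    For contrast with the spiking-network model, the stereo-correspondence problem these records address is classically solved in software by block matching: for each pixel in the left image, search along the same scanline of the right image for the offset (disparity) with the lowest matching cost. A minimal 1-D sketch with synthetic data (not the authors' method):

```python
import numpy as np

def disparity_1d(left_row, right_row, patch=3, max_disp=6):
    """Classical block matching along one scanline (SAD cost).

    For each left pixel, search for the horizontal offset into the
    right row that minimizes the sum of absolute differences over a
    small patch; that offset is the disparity estimate.
    """
    w = len(left_row)
    half = patch // 2
    disp = np.zeros(w, dtype=int)
    for x in range(half, w - half):
        ref = left_row[x - half:x + half + 1]
        best_d, best_cost = 0, np.inf
        for d in range(min(max_disp, x - half) + 1):
            cand = right_row[x - d - half:x - d + half + 1]
            cost = np.abs(ref - cand).sum()
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp[x] = best_d
    return disp

# Synthetic scanline pair: the right view equals the left view shifted
# left by 4 pixels, so the true disparity is 4 where it is observable.
left = np.array([0, 0, 9, 5, 7, 1, 8, 2, 6, 3, 0, 0, 0, 0], dtype=float)
right = np.concatenate([left[4:], np.zeros(4)])
disp = disparity_1d(left, right)
```

    The event-based spiking approach replaces this exhaustive per-pixel search with coincidence detection between asynchronous events, trading a dense cost volume for low-latency, low-power computation.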

  17. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  18. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
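The coincidence-detection principle underlying such spiking stereo models can be caricatured in a few lines of plain code (an illustrative toy, not the authors' network; all names and values are invented): events from the left and right sensors on the same image row that occur within a short temporal window are treated as matches, and their horizontal offset gives the disparity.

```python
# Toy illustration of event-based stereo matching by temporal coincidence.
# Each stream holds (x, t) events from one image row; events closer in time
# than `window` are paired, and their x-offset is the disparity.

def match_events(left, right, window=0.001):
    pairs = []
    for xl, tl in left:
        for xr, tr in right:
            if abs(tl - tr) < window:
                pairs.append((xl, xr, xl - xr))  # (x_left, x_right, disparity)
    return pairs

left_events = [(10, 0.0100), (40, 0.0500)]
right_events = [(7, 0.0102), (35, 0.0504)]
matches = match_events(left_events, right_events)
```

In the actual model this pairwise search is replaced by massively parallel coincidence-detecting spiking neurons, which is what makes the neuromorphic implementation compact and low-latency.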

  19. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Treesearch

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  20. Overview of machine vision methods in x-ray imaging and microtomography

    NASA Astrophysics Data System (ADS)

    Buzmakov, Alexey; Zolotov, Denis; Chukalina, Marina; Nikolaev, Dmitry; Gladkov, Andrey; Ingacheva, Anastasia; Yakimchuk, Ivan; Asadchikov, Victor

    2018-04-01

    Digital X-ray imaging became widely used in science, medicine, non-destructive testing. This allows using modern digital images analysis for automatic information extraction and interpretation. We give short review of scientific applications of machine vision in scientific X-ray imaging and microtomography, including image processing, feature detection and extraction, images compression to increase camera throughput, microtomography reconstruction, visualization and setup adjustment.

  1. Software model of a machine vision system based on the common house fly.

    PubMed

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

    The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and contain seven photoreceptors, each with a Gaussian profile. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.
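A one-dimensional sketch of the cartridge sampling described above (illustrative only; the receptor layout, sigma, and function names are assumptions, not taken from the paper): each photoreceptor responds to the image through a Gaussian sensitivity profile centered at its position.

```python
import math

# Illustrative 1-D sketch: a photoreceptor with a Gaussian sensitivity
# profile returns a Gaussian-weighted average of the underlying signal.

def gaussian_receptor(signal, center, sigma):
    """Gaussian-weighted average of a 1-D signal around `center`."""
    weights = [math.exp(-((i - center) ** 2) / (2 * sigma ** 2))
               for i in range(len(signal))]
    return sum(w * s for w, s in zip(weights, signal)) / sum(weights)

# Seven receptors per "cartridge"; the spacing controls the level of detail.
cartridge = [gaussian_receptor([5.0] * 21, c, 1.0) for c in range(4, 18, 2)]
```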

  2. Component Pin Recognition Using Algorithms Based on Machine Learning

    NASA Astrophysics Data System (ADS)

    Xiao, Yang; Hu, Hong; Liu, Ze; Xu, Jiangchang

    2018-04-01

    The purpose of machine vision for a plug-in machine is to improve the machine's stability and accuracy, and recognition of the component pin is an important part of that vision task. This paper focuses on component pin recognition using three different techniques. The first technique involves traditional image processing using the core algorithm for binary large object (BLOB) analysis. The second technique uses the histogram of oriented gradients (HOG) to experimentally compare the effect of the support vector machine (SVM) and the adaptive boosting (AdaBoost) learning meta-algorithm classifiers. The third technique is a deep learning method known as a convolutional neural network (CNN), which identifies the pin by comparing a sample to its training data. The main purpose of the research presented in this paper is to increase the knowledge of learning methods used in the plug-in machine industry in order to achieve better results.
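The first technique, BLOB analysis, can be sketched with a minimal 4-connected component labelling pass over a binarized image (a generic illustration, not the paper's implementation; a pin detector would then keep blobs whose area falls in a pin-sized range):

```python
from collections import deque

# Generic BLOB-analysis sketch: label 4-connected foreground components in
# a binary image (lists of 0/1) and report their areas.

def blob_areas(img):
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                area, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:                       # breadth-first flood fill
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return areas
```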

  3. Machine vision based quality inspection of flat glass products

    NASA Astrophysics Data System (ADS)

    Zauner, G.; Schagerl, M.

    2014-03-01

    This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little 'image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (std. deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu-moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. In this way, the following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi-class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross-validation for evaluation purposes.
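The histogram-based features listed above are standard statistical moments; a small sketch computing them from a defect region's pixel values (population-moment definitions assumed; this is a generic illustration, not the paper's code):

```python
import math

# Std. deviation, skewness and kurtosis of a pixel-value sample, using
# population moments. Assumes a non-constant input (std > 0).

def histogram_features(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    skewness = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3)
    kurtosis = sum((p - mean) ** 4 for p in pixels) / (n * var ** 2)
    return std, skewness, kurtosis
```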

  4. Advanced integrated enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.

  5. Research into the Architecture of CAD Based Robot Vision Systems

    DTIC Science & Technology

    1988-02-09

    Vision ... and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the... "Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  6. Fuzzy classification for strawberry diseases-infection using machine vision and soft-computing techniques

    NASA Astrophysics Data System (ADS)

    Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil

    2018-04-01

    Robotic agriculture requires smart and workable techniques to substitute machine intelligence for human intelligence. Strawberry is an important Mediterranean crop, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should likewise be capable of vision-based disease identification. The objective of this paper is to practically verify the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves which requires neither a neural network nor time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since a human brain approximates the type and degree of infection, a fuzzy decision maker classifies the leaves from images captured on-site, mimicking the properties of human vision. Optimizing the fuzzy parameters for a typical strawberry production area at summer mid-day in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for segmented, using a typical human instant classification approximation as the benchmark, holding higher accuracy than a human eye identifier. The fuzzy-based classifier provides an approximate result for deciding whether a leaf is healthy or not.
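As a hedged illustration of the fuzzy decision step (the paper's actual membership functions and hue ranges are not given; everything below is invented for the example), a triangular membership function maps a leaf's mean hue to a degree of "healthy", and a threshold turns that degree into a crisp class:

```python
# Illustrative fuzzy leaf-status decision; membership shape, hue range and
# threshold are assumptions, not the authors' tuned parameters.

def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_leaf(mean_hue, threshold=0.5):
    healthy = tri_membership(mean_hue, 60, 100, 140)  # illustrative hue range
    return "healthy" if healthy >= threshold else "infected"
```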

  7. A discrepancy within primate spatial vision and its bearing on the definition of edge detection processes in machine vision

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1990-01-01

    The visual perception of form information is considered to be based on the functioning of simple and complex neurons in the primate striate cortex. However, a review of the physiological data on these brain cells cannot be harmonized with either the perceptual spatial frequency performance of primates or the performance which is necessary for form perception in humans. This discrepancy together with recent interest in cortical-like and perceptual-like processing in image coding and machine vision prompted a series of image processing experiments intended to provide some definition of the selection of image operators. The experiments were aimed at determining operators which could be used to detect edges in a computational manner consistent with the visual perception of structure in images. Fundamental issues were the selection of size (peak spatial frequency) and circular versus oriented operators (or some combination). In a previous study, circular difference-of-Gaussian (DOG) operators, with peak spatial frequency responses at about 11 and 33 cyc/deg were found to capture the primary structural information in images. Here larger scale circular DOG operators were explored and led to severe loss of image structure and introduced spatial dislocations (due to blur) in structure which is not consistent with visual perception. Orientation sensitive operators (akin to one class of simple cortical neurons) introduced ambiguities of edge extent regardless of the scale of the operator. For machine vision schemes which are functionally similar to natural vision form perception, two circularly symmetric very high spatial frequency channels appear to be necessary and sufficient for a wide range of natural images. Such a machine vision scheme is most similar to the physiological performance of the primate lateral geniculate nucleus rather than the striate cortex.
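The circular difference-of-Gaussian operator discussed above has a simple closed form; a 1-D sketch for brevity (the 1:1.6 ratio between the excitatory and inhibitory sigmas is a common convention assumed here, not a value taken from the paper):

```python
import math

# 1-D difference-of-Gaussian (DOG) kernel: a narrow excitatory Gaussian
# minus a wider inhibitory one, each normalized to unit sum, so the
# resulting kernel sums to (approximately) zero.

def dog_kernel(radius, sigma):
    def gaussian(s):
        g = [math.exp(-x * x / (2 * s * s)) for x in range(-radius, radius + 1)]
        total = sum(g)
        return [v / total for v in g]
    narrow, wide = gaussian(sigma), gaussian(1.6 * sigma)
    return [a - b for a, b in zip(narrow, wide)]
```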

  8. 49 CFR 214.507 - Required safety equipment for new on-track roadway maintenance machines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... glass, or other material with similar properties, if the machine is designed with a windshield. Each new... wipers or suitable alternatives that provide the machine operator an equivalent level of vision if...

  9. Machine vision system for measuring conifer seedling morphology

    NASA Astrophysics Data System (ADS)

    Rigney, Michael P.; Kranzler, Glenn A.

    1995-01-01

    A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line- scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
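Two of the derived features above follow directly from the measured primaries; a sketch using the customary forestry definitions (assumed here, not quoted from the paper): sturdiness ratio as shoot height over stem diameter, and shoot-root area ratio as the quotient of the projected areas.

```python
# Derived seedling morphology features; the unit conventions are
# illustrative assumptions.

def sturdiness_ratio(shoot_height_cm, stem_diameter_mm):
    return shoot_height_cm / stem_diameter_mm

def shoot_root_area_ratio(shoot_area_px, root_area_px):
    return shoot_area_px / root_area_px
```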

  10. Can Humans Fly Action Understanding with Multiple Classes of Actors

    DTIC Science & Technology

    2015-06-08

    recognition using structure from motion point clouds. In European Conference on Computer Vision, 2008. [5] R. Caruana. Multitask learning. Machine Learning... autonomous driving? The KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition, 2012. [12] L. Gorelick, M. Blank

  11. Roadway Marking Optics for Autonomous Vehicle Guidance and Other Machine Vision Applications

    NASA Astrophysics Data System (ADS)

    Konopka, Anthony T.

    This work determines optimal planar geometric light source and optical imager configurations and electromagnetic wavelengths for maximizing the reflected signal intensity when using machine vision technology to image roadway markings with embedded spherical glass beads. It is found through a first set of experiments that roadway marking samples exhibiting little or no bead rolling effects are uniformly reflective with respect to the azimuthal angle of observation when measured for retroreflectivity within industry standard 30-meter geometry. A second set of experiments indicate that white roadway markings exhibit higher reflectivity throughout the visible spectrum than yellow roadway markings. A roadway marking optical model capable of being used to determine optimal geometric light source and optical imager configurations for maximizing the reflected signal intensities of roadway marking targets is constructed and simulated using optical engineering software. It is found through a third set of experiments that high signal intensities can be measured when the polar angles of the light source and optical imager along a plane normal to a roadway marking are equal, with the maximum signal intensity being measured when the polar angles of both the light source and optical imager are 90°.

  12. Automatic detection system of shaft part surface defect based on machine vision

    NASA Astrophysics Data System (ADS)

    Jiang, Lixing; Sun, Kuoyuan; Zhao, Fulai; Hao, Xiangyang

    2015-05-01

    Surface physical damage detection is an important part of shaft part quality inspection, and the traditional detection methods rely mostly on human eye identification, which has many disadvantages such as low efficiency and poor reliability. In order to improve the automation level of shaft part quality inspection and establish relevant industry quality standards, a machine vision inspection system connected to an MCU was designed to realize surface inspection of shaft parts. The system adopts a monochrome line-scan digital camera and uses dark-field, forward illumination to acquire images with high contrast. After image filtering and enhancement, the images were segmented into binary images using the maximum between-cluster variance (Otsu) method; then the main contours were extracted based on aspect-ratio and area criteria; then the coordinates of the center of gravity of each defect area, namely the locating-point coordinates, were calculated. Finally, defect locations were marked by a coding pen communicating with the MCU. Experiments showed that no defect was missed and the false alarm rate was lower than 5%, which shows that the designed system meets the demands of on-line, real-time shaft part inspection.
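The maximum between-cluster variance method named above is Otsu's thresholding; a minimal histogram-based implementation (a generic sketch, not the paper's code):

```python
# Otsu's method: choose the gray-level threshold that maximizes the
# between-class variance of the background/foreground split.
# `hist` is a per-gray-level pixel count, e.g. 256 bins.

def otsu_threshold(hist):
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w_b += h                      # background weight up to bin t
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * h
        w_f = total - w_b             # foreground weight
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        between_var = w_b * w_f * (m_b - m_f) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t
```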

  13. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  14. Fire protection for launch facilities using machine vision fire detection

    NASA Astrophysics Data System (ADS)

    Schwartz, Douglas B.

    1993-02-01

    Fire protection of critical space assets, including launch and fueling facilities and manned flight hardware, demands automatic sensors for continuous monitoring, and in certain high-threat areas, fast-reacting automatic suppression systems. Perhaps the most essential characteristic for these fire detection and suppression systems is high reliability; in other words, fire detectors should alarm only on actual fires and not be falsely activated by extraneous sources. Existing types of fire detectors have been greatly improved in the past decade; however, fundamental limitations of their method of operation leave open a significant possibility of false alarms and restrict their usefulness. At the Civil Engineering Laboratory at Tyndall Air Force Base in Florida, a new type of fire detector is under development which 'sees' a fire visually, like a human being, and makes a reliable decision based on known visual characteristics of flames. Hardware prototypes of the Machine Vision (MV) Fire Detection System have undergone live fire tests and demonstrated extremely high accuracy in discriminating actual fires from false alarm sources. In fact, this technology promises to virtually eliminate false activations. This detector could be used to monitor fueling facilities, launch towers, clean rooms, and other high-value and high-risk areas. Applications can extend to space station and in-flight shuttle operations as well; fiber optics and remote camera heads enable the system to see around obstructed areas and crew compartments. The capability of the technology to distinguish fires means that fire detection can be provided even during maintenance operations, such as welding.

  15. Fire protection for launch facilities using machine vision fire detection

    NASA Technical Reports Server (NTRS)

    Schwartz, Douglas B.

    1993-01-01

    Fire protection of critical space assets, including launch and fueling facilities and manned flight hardware, demands automatic sensors for continuous monitoring, and in certain high-threat areas, fast-reacting automatic suppression systems. Perhaps the most essential characteristic for these fire detection and suppression systems is high reliability; in other words, fire detectors should alarm only on actual fires and not be falsely activated by extraneous sources. Existing types of fire detectors have been greatly improved in the past decade; however, fundamental limitations of their method of operation leave open a significant possibility of false alarms and restrict their usefulness. At the Civil Engineering Laboratory at Tyndall Air Force Base in Florida, a new type of fire detector is under development which 'sees' a fire visually, like a human being, and makes a reliable decision based on known visual characteristics of flames. Hardware prototypes of the Machine Vision (MV) Fire Detection System have undergone live fire tests and demonstrated extremely high accuracy in discriminating actual fires from false alarm sources. In fact, this technology promises to virtually eliminate false activations. This detector could be used to monitor fueling facilities, launch towers, clean rooms, and other high-value and high-risk areas. Applications can extend to space station and in-flight shuttle operations as well; fiber optics and remote camera heads enable the system to see around obstructed areas and crew compartments. The capability of the technology to distinguish fires means that fire detection can be provided even during maintenance operations, such as welding.

  16. Design of control system for optical fiber drawing machine driven by double motor

    NASA Astrophysics Data System (ADS)

    Yu, Yue Chen; Bo, Yu Ming; Wang, Jun

    2018-01-01

    A microchannel plate (MCP) is a large-area array electron multiplier with high two-dimensional spatial resolution, used in high-performance night vision intensifiers. High-precision control of the fiber is the key technology of the microchannel plate manufacturing process, and in this paper it was achieved by controlling an optical fiber drawing machine driven by two motors. First, using an STM32 chip, the servo motor drive and control circuit was designed to realize dual-motor synchronization. Second, a neural network PID control algorithm was designed to control the fiber diameter with high precision. Finally, a hexagonal fiber was manufactured with this system, showing that the multifilament diameter accuracy is within +/- 1.5 μm.
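As a hedged sketch of the control loop (the paper tunes the PID gains with a neural network; that tuning stage is omitted here, and the gains, time step, and example values are invented), a plain discrete PID controller looks like:

```python
# Plain discrete PID controller standing in for the paper's neural-network
# PID; all gains and numbers are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                  # accumulate error
        derivative = (err - self.prev_err) / self.dt    # error rate
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * derivative

# e.g. regulating fiber diameter (um) toward a setpoint each control tick
controller = PID(kp=2.0, ki=0.0, kd=0.0, dt=1.0)
correction = controller.update(setpoint=10.0, measured=8.0)
```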

  17. Bi Sparsity Pursuit: A Paradigm for Robust Subspace Recovery

    DTIC Science & Technology

    2016-09-27

    The success of sparse models in computer vision and machine learning is due to the fact that high-dimensional data is distributed in a union of low-dimensional subspaces in many real-world... Keywords: signal recovery, sparse learning, subspace modeling.

  18. Quantifying nonhomogeneous colors in agricultural materials. Part II: comparison of machine vision and sensory panel evaluations.

    PubMed

    Balaban, M O; Aparicio, J; Zotarelli, M; Sims, C

    2008-11-01

    The average colors of mangos and apples were measured using machine vision. A method to quantify the perception of nonhomogeneous colors by sensory panelists was developed. Three colors out of several reference colors and their perceived percentage of the total sample area were selected by untrained panelists. Differences between the average colors perceived by panelists and those from the machine vision were reported as DeltaE values (color difference error). Effects of nonhomogeneity of color, and using real samples or their images in the sensory panels on DeltaE were evaluated. In general, samples with more nonuniform colors had higher DeltaE values, suggesting that panelists had more difficulty in evaluating more nonhomogeneous colors. There was no significant difference in DeltaE values between the real fruits and their screen image, therefore images can be used to evaluate color instead of the real samples.
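The color-difference error (DeltaE) reported above is, in its classic CIE76 form, the Euclidean distance between two colors in L*a*b* space. The paper does not state which DeltaE variant was used, so CIE76 is assumed for this sketch:

```python
import math

# CIE76 color difference: Euclidean distance between two (L*, a*, b*)
# triples. Variant choice is an assumption, not stated in the paper.

def delta_e_cie76(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))
```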

  19. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

    In conventional tool positioning technique, sensors embedded in the motion stages provide the accurate tool position information. In this paper, a machine vision based system and image processing technique for motion measurement of lathe tool from two-dimensional sequential images captured using charge coupled device camera having a resolution of 250 microns has been described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely, artificial immune system (AIS) and particle swarm optimization (PSO). The results show better capability of AIS over PSO.
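A minimal one-dimensional particle swarm optimizer of the kind the paper compares (the AIS counterpart is omitted; the hyperparameters, bounds, and quadratic error function below are illustrative, not the paper's):

```python
import random

# Minimal 1-D PSO: each particle tracks its personal best, the swarm
# tracks a global best, and velocities blend inertia with pulls toward
# both bests. Seeded for reproducibility.

def pso_minimize(f, lo, hi, n=20, iters=100, w=0.5, c1=1.5, c2=1.5, seed=0):
    rnd = random.Random(seed)
    xs = [rnd.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]
    gbest = min(xs, key=f)
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * rnd.random() * (pbest[i] - xs[i])
                     + c2 * rnd.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)   # clamp to bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```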

  20. Generic decoding of seen and imagined objects using hierarchical visual features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-05-22

    Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.

  1. Performance evaluation of various classifiers for color prediction of rice paddy plant leaf

    NASA Astrophysics Data System (ADS)

    Singh, Amandeep; Singh, Maninder Lal

    2016-11-01

    The food industry is one of the industries that uses machine vision for a nondestructive quality evaluation of the produce. These quality measuring systems and softwares are precalculated on the basis of various image-processing algorithms which generally use a particular type of classifier. These classifiers play a vital role in making the algorithms so intelligent that it can contribute its best while performing the said quality evaluations by translating the human perception into machine vision and hence machine learning. The crop of interest is rice, and the color of this crop indicates the health status of the plant. An enormous number of classifiers are available to solve the purpose of color prediction, but choosing the best among them is the focus of this paper. Performance of a total of 60 classifiers has been analyzed from the application point of view, and the results have been discussed. The motivation comes from the idea of providing a set of classifiers with excellent performance and implementing them on a single algorithm for the improvement of machine vision learning and, hence, associated applications.

  2. Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness

    NASA Astrophysics Data System (ADS)

    Singh, Preetpal

    Canada is home to thousands of freshwater lakes and rivers. Apart from being sources of great natural beauty, rivers and lakes are an important source of water, food and transportation. The northern regions of Canada experience extreme cold temperatures in the winter, resulting in a freeze-up of regional lakes and rivers. Frozen lakes and rivers offer unique opportunities in terms of wildlife harvesting and winter transportation. Ice roads built on frozen rivers and lakes are vital supply lines for industrial operations in the remote north. Monitoring the annual freeze-up and break-up dates of ice can help predict regional climatic changes. Lake ice impacts a variety of physical, ecological and economic processes. The construction and maintenance of a winter road can cost millions of dollars annually. A good understanding of ice mechanics is required to build an ice road and deem it safe. A crucial factor in calculating the load-bearing capacity of ice sheets is the thickness of the ice. Construction costs are mainly attributed to producing and maintaining a specific thickness and density of ice that can support different loads. Climate change is leading to warmer temperatures, causing the ice to thin faster. At a certain point, a winter road may not be thick enough to support travel and transportation. There is considerable interest in monitoring winter road conditions given the high construction and maintenance costs involved. Remote sensing technologies such as synthetic aperture radar have been successfully utilized to study the extent of ice covers and record freeze-up and break-up dates of ice on lakes and rivers across the north. Ice road builders often use ultrasound equipment to measure ice thickness. However, an automated monitoring system based on machine vision and image processing technology that can measure lake ice thickness has not yet been developed.
Machine vision and image processing techniques have successfully been used in manufacturing to detect equipment failure and identify defective products at the assembly line. The research work in this thesis combines machine vision and image processing technology to build a digital imaging and processing system for monitoring and measuring lake ice thickness in real time. An ultra-compact USB camera is programmed to acquire and transmit high resolution imagery for processing with MATLAB Image Processing toolbox. The image acquisition and transmission process is fully automated; image analysis is semi-automated and requires limited user input. Potential design changes to the prototype and ideas on fully automating the imaging and processing procedure are presented to conclude this research work.

  3. The use of open and machine vision technologies for development of gesture recognition intelligent systems

    NASA Astrophysics Data System (ADS)

    Cherkasov, Kirill V.; Gavrilova, Irina V.; Chernova, Elena V.; Dokolin, Andrey S.

    2018-05-01

The article describes selected aspects of the development of an intelligent gesture-recognition system. The distinguishing feature of the system is its intelligent block, which is based entirely on open technologies: the OpenCV library and the Microsoft Cognitive Toolkit (CNTK) platform. The article presents the rationale for this choice of tools, as well as the functional scheme of the system and the hierarchy of its modules. Experiments have shown that the system correctly recognizes about 85% of images received from sensors. The authors expect that improving the algorithmic block of the system will increase gesture-recognition accuracy to 95%.

  4. Software architecture for time-constrained machine vision applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, routes messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
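The topic-based publish/subscribe routing described in the abstract can be sketched in a few lines. This is an illustrative stand-in, not the authors' architecture; the class and topic names (`MessageBus`, `"frames"`) are invented for the example.

```python
from collections import defaultdict

class MessageBus:
    """Minimal topic-based publish/subscribe router (illustrative sketch)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # Register interest in one topic; only matching messages are delivered.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Route the message to every subscriber of this topic.
        for callback in self._subscribers[topic]:
            callback(message)

# Example: an acquisition module publishes frames; a visualization
# module subscribes only to the "frames" topic.
bus = MessageBus()
received = []
bus.subscribe("frames", received.append)
bus.publish("frames", "frame-001")
bus.publish("stats", {"fps": 25})  # no subscriber for "stats": silently dropped
```

Because publishers and subscribers only share topic names, modules such as acquisition, visualization, and data processing can be added or removed without changing each other's code.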

  5. Welding technology transfer task/laser based weld joint tracking system for compressor girth welds

    NASA Technical Reports Server (NTRS)

    Looney, Alan

    1991-01-01

Sensors to control and monitor welding operations are currently being developed at Marshall Space Flight Center. The laser-based weld bead profiler/torch rotation sensor was modified to provide a weld joint tracking system for compressor girth welds. The tracking system features a precision laser-based vision sensor, automated two-axis machine motion, and an industrial PC controller. The system's benefits are the elimination of weld repairs caused by joint-tracking errors, which reduces manufacturing costs and increases production output; simplified tooling; and the freeing of costly manufacturing floor space.

  6. IEEE 1982. Proceedings of the international conference on cybernetics and society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-01-01

    The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.

  7. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.

    PubMed

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F

    2016-03-05

In the factory of the future, most operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, and to complement the information provided by other sensors to improve their positioning accuracy. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and now for robot guidance. Choosing which type of vision system to use depends strongly on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes the accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can use it as background information for their future work.

  8. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review

    PubMed Central

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F.

    2016-01-01

In the factory of the future, most operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, and to complement the information provided by other sensors to improve their positioning accuracy. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and now for robot guidance. Choosing which type of vision system to use depends strongly on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes the accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can use it as background information for their future work. PMID:26959030

  9. Construction of an automated fiber pigtailing machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strand, O.T.

    1996-01-01

At present, the high cost of optoelectronic (OE) devices is caused in part by the labor-intensive processes involved with packaging. Automating the packaging processes should result in a significant cost reduction. One of the most labor-intensive steps is aligning and attaching the fiber to the OE device, the so-called pigtailing process. Therefore, the goal of this 2-year ARPA-funded project is to design and build 3 low-cost machines to perform sub-micron alignments and attachments of single-mode fibers to different OE devices. These Automated Fiber Pigtailing Machines (AFPMs) are intended to be compatible with a manufacturing environment and have a modular design for standardization of parts and machine vision for maximum flexibility. This work is a collaboration among Uniphase Telecommunications Products (formerly United Technologies Photonics, UTP), Ortel, Newport/Klinger, the Massachusetts Institute of Technology Manufacturing Institute (MIT), and Lawrence Livermore National Laboratory (LLNL). UTP and Ortel are the industrial partners for whom two of the AFPMs are being built. MIT and LLNL make up the design and assembly team of the project, while Newport/Klinger is a potential manufacturer of the AFPM and provides guidance to ensure that the design of the AFPM is marketable and compatible with a manufacturing environment. The AFPM for UTP will pigtail LiNbO3 waveguide devices and the AFPM for Ortel will pigtail photodiodes. Both of these machines will contain proprietary information, so the third AFPM, to reside at LLNL, will pigtail a non-proprietary waveguide device for demonstrations to US industry.

  10. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  11. A neurite quality index and machine vision software for improved quantification of neurodegeneration.

    PubMed

    Romero, Peggy; Miller, Ted; Garakani, Arman

    2009-12-01

Current methods to assess neurodegeneration in dorsal root ganglion cultures as a model for neurodegenerative diseases are imprecise and time-consuming. Here we describe two new methods to quantify neuroprotection in these cultures. The neurite quality index (NQI) builds upon earlier manual methods, incorporating additional morphological events to increase sensitivity for detecting early degeneration events. Neurosight is a machine vision-based method that recapitulates many of the strengths of the NQI while enabling high-throughput screening applications at decreased cost.

  12. Development of machine-vision system for gap inspection of muskmelon grafted seedlings.

    PubMed

    Liu, Siyao; Xing, Zuochang; Wang, Zifan; Tian, Subo; Jahun, Falalu Rabiu

    2017-01-01

Grafting robots have been developed around the world, but some auxiliary tasks, such as gap inspection of grafted seedlings, still need to be done by humans. A machine-vision system for gap inspection of grafted muskmelon seedlings was developed in this study. The image acquisition system consists of a CCD camera, a lens and a front white lighting source. The image of the inspected gap was processed and analyzed with HALCON 12.0 software. The recognition algorithm is based on the principle of deformable template matching. First, a template is created from an image of a qualified grafted seedling gap. Then the gap image of a grafted seedling is compared with the template to determine their matching degree. Based on the similarity between the gap image and the template, the matching degree ranges from 0 to 1: the less similar the gap is to the template, the smaller the matching degree. Finally, the gap is classified as qualified or unqualified: if the matching degree is less than 0.58, or no match is found, the gap is judged unqualified; otherwise it is judged qualified. To test the gap inspection system, 100 muskmelon seedlings were grafted and inspected. Results showed that the machine-vision system agreed with human visual inspection on gap qualification in 98% of cases, and the inspection speed of the system can reach 15 seedlings·min-1. With this system, the gap inspection process in grafting can be fully automated, a key step toward fully automatic grafting robots.
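The decision rule above (matching degree in 0..1, reject below 0.58) can be illustrated with a simplified stand-in: plain normalized cross-correlation between flattened grayscale patches, rather than HALCON's deformable template matching. The function names and sample pixel values are invented for the example.

```python
import math

def matching_degree(template, image):
    """Normalized cross-correlation between two equal-length lists of
    grayscale values (0-255), clamped to a 0..1 score. A simplified
    stand-in for deformable template matching."""
    n = len(template)
    mt = sum(template) / n
    mi = sum(image) / n
    num = sum((t - mt) * (i - mi) for t, i in zip(template, image))
    den = math.sqrt(sum((t - mt) ** 2 for t in template) *
                    sum((i - mi) ** 2 for i in image))
    if den == 0:
        return 0.0
    return max(0.0, num / den)  # anti-correlated patches score 0

def inspect_gap(template, image, threshold=0.58):
    # Below the threshold the gap is judged unqualified, as in the paper.
    degree = matching_degree(template, image)
    return "qualified" if degree >= threshold else "unqualified"

template_gap = [10, 200, 10, 200, 10, 200]   # qualified reference gap
similar_gap = [12, 198, 11, 202, 9, 201]     # close match -> qualified
deformed_gap = [200, 10, 200, 10, 200, 10]   # poor match -> unqualified
```

A real system would search the whole image for the best-matching template pose; here the patches are assumed already aligned.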

  13. Biologically based machine vision: signal analysis of monopolar cells in the visual system of Musca domestica.

    PubMed

    Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie

    2002-01-01

Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements, whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge-related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of the information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also to enhance edge detection. A mesh of interconnected L4 cells would correlate the output from the L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry needed to detect and segment images.

  14. Automatic Quality Inspection of Percussion Cap Mass Production by Means of 3D Machine Vision and Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.

Exhaustive quality control is becoming very important in the globalized world market. One example where quality control becomes critical is percussion cap mass production: these elements must be fabricated within a minimum tolerance deviation. This paper outlines a machine vision development using a 3D camera for inspection of the whole production of percussion caps. The system presents multiple problems, such as metallic reflections on the percussion caps, high-speed movement of the system, and mechanical errors and irregularities in percussion cap placement. Because of these problems, the task cannot be solved by traditional image processing methods, and hence machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.

  15. Machine vision inspection of lace using a neural network

    NASA Astrophysics Data System (ADS)

    Sanby, Christopher; Norton-Wayne, Leonard

    1995-03-01

Lace is particularly difficult to inspect using machine vision, since it comprises a fine and complex pattern of threads which must be verified on line and in real time, and small distortions in the pattern are unavoidable. This paper describes instrumentation for inspecting lace actually on the knitting machine. A CCD line-scan camera synchronized to machine motions grabs an image of the lace. Differences between this lace image and a perfect prototype image are detected by comparison methods, thresholding techniques, and finally a neural network (to distinguish real defects from false alarms). Though produced originally in a laboratory on SUN SPARC workstations, the processing has subsequently been implemented on a 50 MHz 486 PC compatible. Successful operation has been demonstrated in a factory, but over a restricted width; full-width coverage awaits faster processing.

  16. Spatial vision processes: From the optical image to the symbolic structures of contour information

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1988-01-01

    The significance of machine and natural vision is discussed together with the need for a general approach to image acquisition and processing aimed at recognition. An exploratory scheme is proposed which encompasses the definition of spatial primitives, intrinsic image properties and sampling, 2-D edge detection at the smallest scale, the construction of spatial primitives from edges, and the isolation of contour information from textural information. Concepts drawn from or suggested by natural vision at both perceptual and physiological levels are relied upon heavily to guide the development of the overall scheme. The scheme is intended to provide a larger context in which to place the emerging technology of detector array focal-plane processors. The approach differs from many recent efforts in edge detection and image coding by emphasizing smallest scale edge detection as a foundation for multi-scale symbolic processing while diminishing somewhat the importance of image convolutions with multi-scale edge operators. Cursory treatments of information theory illustrate that the direct application of this theory to structural information in images could not be realized.

  17. Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    ERIC Educational Resources Information Center

    Cazorla, Miguel; Viejo, Diego

    2015-01-01

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library, oriented to teaching computer vision. We have designed and built the library from the scratch with emphasis on readability and…

  18. Feature recognition and detection for ancient architecture based on machine vision

    NASA Astrophysics Data System (ADS)

    Zou, Zheng; Wang, Niannian; Zhao, Peng; Zhao, Xuefeng

    2018-03-01

Ancient architecture has very high historical and artistic value. Ancient buildings carry a wide variety of textures and decorative paintings, which contain a great deal of historical meaning, so the study and cataloguing of these compositional and decorative features plays an important role in subsequent research. Until recently, however, these components were catalogued mainly by hand, which consumes a great deal of labor and time and is inefficient. At present, supported by big data and GPU-accelerated training, machine vision with deep learning at its core has developed rapidly and is widely used in many fields. This paper proposes an approach to recognize and detect the textures, decorations and other features of ancient buildings based on machine vision. First, a large number of surface-texture images of ancient building components are classified manually to form a sample set. Then a convolutional neural network is trained on the samples to obtain a classification detector. Finally, its precision is verified.

  19. An Automated Classification Technique for Detecting Defects in Battery Cells

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2006-01-01

Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine if the battery cell is acceptable for a particular use or device. Human visual inspection is time-consuming compared to an inspection process conducted by a machine vision system, and it is also subject to human error and fatigue over time. We present a machine vision technique that automatically identifies defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or cathode cell view, as well as an acceptable or defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.
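The idea of classifying surfaces from their 2D frequency content can be sketched with a toy example: a naive 2D DFT (a stand-in for the FFT stage), a single spectral-energy feature, and a threshold rule. This is not the paper's classifier; the images, feature, and threshold are invented for illustration.

```python
import cmath
import math

def dft2(img):
    """Naive 2-D discrete Fourier transform of a small grayscale grid.
    (Real systems use an FFT library; this is only for tiny examples.)"""
    h, w = len(img), len(img[0])
    out = [[0j] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            s = 0j
            for y in range(h):
                for x in range(w):
                    s += img[y][x] * cmath.exp(-2j * math.pi * (u * y / h + v * x / w))
            out[u][v] = s
    return out

def spectrum_feature(img):
    # Feature: fraction of spectral energy outside the DC (mean) component.
    # A uniform surface concentrates energy at DC; periodic defects do not.
    f = dft2(img)
    total = sum(abs(c) ** 2 for row in f for c in row)
    dc = abs(f[0][0]) ** 2
    return (total - dc) / total if total else 0.0

def classify(img, threshold=0.25):
    return "defective" if spectrum_feature(img) > threshold else "acceptable"

smooth = [[100] * 4 for _ in range(4)]            # uniform surface
scratched = [[100, 0, 100, 0] for _ in range(4)]  # periodic defect pattern
```

A morphological feature-based classifier as in the paper would combine several such features and learn the decision boundary rather than fix a threshold by hand.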

  20. Humanlike robots: the upcoming revolution in robotics

    NASA Astrophysics Data System (ADS)

    Bar-Cohen, Yoseph

    2009-08-01

Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots that do not have a human shape, such as the Roomba vacuum cleaner and the robotic lawnmower, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns that need to be addressed as the technology advances, including the need to prevent accidents, deliberate harm, or use in crime. In this paper the state of the art of the ultimate goal of biomimetics, the development of humanlike robots, its potentials and its challenges are reviewed.

  1. Humanlike Robots - The Upcoming Revolution in Robotics

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph

    2009-01-01

Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots that do not have a human shape, such as the Roomba vacuum cleaner and the robotic lawnmower, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns that need to be addressed as the technology advances, including the need to prevent accidents, deliberate harm, or use in crime. In this paper the state of the art of the ultimate goal of biomimetics, the development of humanlike robots, its potentials and its challenges are reviewed.

  2. Migrating EO/IR sensors to cloud-based infrastructure as service architectures

    NASA Astrophysics Data System (ADS)

    Berglie, Stephen T.; Webster, Steven; May, Christopher M.

    2014-06-01

The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed, synthesized, full-motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, cloud-based IAS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A distinction is made between general-purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications on higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.

  3. A noninvasive technique for real-time detection of bruises in apple surface based on machine vision

    NASA Astrophysics Data System (ADS)

    Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira

    2013-05-01

Apple is one of the most widely consumed fruits in daily life. However, because bruising strongly affects taste and export value, apple quality must be assessed before the fruit reaches the consumer. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consists of a light shield with two monochrome cameras installed at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the positioning of the sample. A Graphical User Interface (GUI) was developed on the VS2010 platform to control the hardware and display the image processing results. The hardware-software system acquires images of three samples from each camera and displays the image processing results in real time. An image processing algorithm was developed with OpenCV in C++. The software controls the hardware system and classifies each apple into one of two grades based on the presence or absence of surface bruises of 5 mm size. The experimental results are promising, and with further modification the system could be applied to industrial production in the near future.
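The two-grade decision in the abstract (bruised vs. unbruised) can be caricatured as a dark-pixel screen on a grayscale patch: bruises image darker than sound skin, so enough dark pixels flag the fruit. This is a deliberately minimal sketch, not the paper's OpenCV/C++ pipeline; the thresholds and pixel values are invented.

```python
def detect_bruise(image, dark_threshold=80, min_pixels=4):
    """Toy bruise screen on a grayscale grid (0-255 values):
    flag a bruise when at least `min_pixels` pixels are darker
    than `dark_threshold`. Real systems would also check that the
    dark pixels form a connected region of the required size."""
    dark = sum(1 for row in image for px in row if px < dark_threshold)
    return dark >= min_pixels

clean_patch = [[200, 210], [205, 198]]            # uniformly bright skin
bruised_patch = [[200, 40], [35, 50], [45, 210]]  # cluster of dark pixels
```

Grading an apple would then mean running this screen over every patch of each camera view and rejecting the fruit if any patch triggers.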

  4. An active role for machine learning in drug development

    PubMed Central

    Murphy, Robert F.

    2014-01-01

    Due to the complexity of biological systems, cutting-edge machine-learning methods will be critical for future drug development. In particular, machine-vision methods to extract detailed information from imaging assays and active-learning methods to guide experimentation will be required to overcome the dimensionality problem in drug development. PMID:21587249

  5. A Survey on Ambient Intelligence in Health Care

    PubMed Central

    Acampora, Giovanni; Cook, Diane J.; Rashidi, Parisa; Vasilakos, Athanasios V.

    2013-01-01

Ambient Intelligence (AmI) is a new paradigm in information technology aimed at empowering people’s capabilities by means of digital environments that are sensitive, adaptive, and responsive to human needs, habits, gestures, and emotions. This futuristic vision of the daily environment will enable innovative human-machine interactions characterized by pervasive, unobtrusive and anticipatory communications. Such innovative interaction paradigms make ambient intelligence technology a suitable candidate for developing various real-life solutions, including in the health care domain. This survey will discuss the emergence of ambient intelligence (AmI) techniques in the health care domain, in order to provide the research community with the necessary background. We will examine the infrastructure and technology required for achieving the vision of ambient intelligence, such as smart environments and wearable medical devices. We will summarize the state-of-the-art artificial intelligence methodologies used for developing AmI systems in the health care domain, including various learning techniques (for learning from user interaction), reasoning techniques (for reasoning about users’ goals and intentions) and planning techniques (for planning activities and interactions). We will also discuss how AmI technology might support people affected by various physical or mental disabilities or chronic diseases. Finally, we will point to some of the successful case studies in the area and we will look at the current and future challenges to outline possible future research paths. PMID:24431472

  6. A Survey on Ambient Intelligence in Health Care.

    PubMed

    Acampora, Giovanni; Cook, Diane J; Rashidi, Parisa; Vasilakos, Athanasios V

    2013-12-01

Ambient Intelligence (AmI) is a new paradigm in information technology aimed at empowering people's capabilities by means of digital environments that are sensitive, adaptive, and responsive to human needs, habits, gestures, and emotions. This futuristic vision of the daily environment will enable innovative human-machine interactions characterized by pervasive, unobtrusive and anticipatory communications. Such innovative interaction paradigms make ambient intelligence technology a suitable candidate for developing various real-life solutions, including in the health care domain. This survey will discuss the emergence of ambient intelligence (AmI) techniques in the health care domain, in order to provide the research community with the necessary background. We will examine the infrastructure and technology required for achieving the vision of ambient intelligence, such as smart environments and wearable medical devices. We will summarize the state-of-the-art artificial intelligence methodologies used for developing AmI systems in the health care domain, including various learning techniques (for learning from user interaction), reasoning techniques (for reasoning about users' goals and intentions) and planning techniques (for planning activities and interactions). We will also discuss how AmI technology might support people affected by various physical or mental disabilities or chronic diseases. Finally, we will point to some of the successful case studies in the area and we will look at the current and future challenges to outline possible future research paths.

  7. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capability and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore desirable to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  8. Machine Learning Techniques in Clinical Vision Sciences.

    PubMed

    Caixinha, Miguel; Nunes, Sandrina

    2017-01-01

    This review presents and discusses the contribution of machine learning techniques to diagnosis and disease monitoring in the context of clinical vision science. Many ocular diseases leading to blindness can be halted or delayed when detected and treated at their earliest stages. With the recent developments in diagnostic devices, imaging and genomics, new sources of data for early disease detection and patient management are now available. Machine learning techniques emerged in the biomedical sciences as clinical decision-support techniques to improve the sensitivity and specificity of disease detection and monitoring, adding objectivity to the clinical decision-making process. This manuscript presents a review of multimodal ocular disease diagnosis and monitoring based on machine learning approaches. In the first section, the technical issues related to the different machine learning approaches will be presented. Machine learning techniques are used to automatically recognize complex patterns in a given dataset. These techniques allow creating homogeneous groups (unsupervised learning), or creating a classifier predicting group membership of new cases (supervised learning), when a group label is available for each case. To ensure a good performance of the machine learning techniques in a given dataset, all possible sources of bias should be removed or minimized. For that, the representativeness of the input dataset for the true population should be confirmed, the noise should be removed, the missing data should be treated, and the data dimensionality (i.e., the number of parameters/features and the number of cases in the dataset) should be adjusted. The application of machine learning techniques in ocular disease diagnosis and monitoring will be presented and discussed in the second section of this manuscript. 
To show the clinical benefits of machine learning in clinical vision sciences, several examples will be presented in glaucoma, age-related macular degeneration, and diabetic retinopathy, these ocular pathologies being the major causes of irreversible visual impairment.
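    The unsupervised/supervised distinction described in this abstract can be illustrated with a minimal sketch. The "feature vectors" and disease labels below are invented for illustration, not clinical data: unsupervised learning groups unlabelled cases into homogeneous clusters, while supervised learning predicts the group label of a new case from labelled training data.

    ```python
    # Toy illustration of the two learning settings the review describes;
    # all data and labels are hypothetical.
    import math

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    cases = [(0.1, 0.2), (0.2, 0.1), (2.0, 2.1), (2.2, 1.9)]

    # Unsupervised learning: split unlabelled cases into two homogeneous groups
    # (a 2-means clustering, with fixed seed centroids for determinism).
    def two_groups(points, c0, c1, iters=10):
        for _ in range(iters):
            g0 = [p for p in points if dist(p, c0) <= dist(p, c1)]
            g1 = [p for p in points if dist(p, c0) > dist(p, c1)]
            c0 = tuple(sum(v) / len(g0) for v in zip(*g0))
            c1 = tuple(sum(v) / len(g1) for v in zip(*g1))
        return g0, g1

    # Supervised learning: with a label available for each case, classify a
    # new case by the nearest class centroid.
    def nearest_centroid(train, labels, new_case):
        centroids = {}
        for lab in set(labels):
            pts = [p for p, l in zip(train, labels) if l == lab]
            centroids[lab] = tuple(sum(v) / len(pts) for v in zip(*pts))
        return min(centroids, key=lambda lab: dist(centroids[lab], new_case))

    g0, g1 = two_groups(cases, cases[0], cases[2])
    label = nearest_centroid(cases, ["healthy", "healthy", "disease", "disease"], (2.1, 2.0))
    ```

    The same data can thus be used in both modes; the difference is only whether the group label is available during training.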

  9. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  10. Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Negro Maggio, Valentina; Iocchi, Luca

    2015-02-01

    Object classification from images is an important task for machine vision and a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image-based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return the best configuration of image processing and classification algorithms, and of their parameters, with respect to classification accuracy. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.

  11. The Intangible Assets Advantages in the Machine Vision Inspection of Thermoplastic Materials

    NASA Astrophysics Data System (ADS)

    Muntean, Diana; Răulea, Andreea Simina

    2017-12-01

    Innovation is not a simple concept, but it is a main source of success. To make the most of an idea or business, it is more important to have the right people and mindsets in place than to have a perfectly crafted plan. The aim of this paper is to emphasize the importance of intangible assets in the machine vision inspection of thermoplastic materials, pointing out some aspects related to knowledge-based assets and their role in developing a successful idea into a successful product.

  12. Spectral imaging spreads into new industrial and on-field applications

    NASA Astrophysics Data System (ADS)

    Bouyé, Clémentine; Robin, Thierry; d'Humières, Benoît

    2018-02-01

    Numerous recent innovative developments have led to a substantial reduction in the cost and size of hyperspectral and multispectral cameras. The resulting products - compact, reliable, low-cost, easy-to-use - meet end-user requirements in major fields: agriculture, food and beverages, pharmaceutics, machine vision, and health. The breakthrough of this technology in industrial and on-field applications is drawing closer. Indeed, the spectral imaging market is at a turning point: a high growth rate of 20% is expected over the next 5 years, and the number of cameras sold is expected to increase from 3,600 in 2017 to more than 9,000 in 2022.

  13. A Graphical Operator Interface for a Telerobotic Inspection System

    NASA Technical Reports Server (NTRS)

    Kim, W. S.; Tso, K. S.; Hayati, S.

    1993-01-01

    The operator interface has recently emerged as an important element for efficient and safe operator interactions with the telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable the development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  14. An operator interface design for a telerobotic inspection system

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Tso, Kam S.; Hayati, Samad

    1993-01-01

    The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobotic systems. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface, designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability, supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  15. Machine vision system: a tool for quality inspection of food and agricultural products.

    PubMed

    Patel, Krishna Kumar; Kar, A; Jha, S N; Khan, M A

    2012-04-01

    Quality inspection of food and agricultural produce is difficult and labor intensive. At the same time, with increased expectations for food products of high quality and safety standards, the need for accurate, fast and objective determination of these characteristics in food products continues to grow. In India, however, these operations are generally manual, which is costly as well as unreliable, because human judgement of quality factors such as appearance, flavor, nutrient content, texture, etc., is inconsistent, subjective and slow. Machine vision provides one alternative: an automated, non-destructive and cost-effective technique to meet these requirements. This inspection approach, based on image analysis and processing, has found a variety of applications in the food industry. Considerable research has highlighted its potential for the inspection and grading of fruits and vegetables, for the examination of grain quality and characteristics, and for the quality evaluation of other food products such as bakery products, pizza, cheese and noodles. The objective of this paper is to provide an in-depth introduction to the machine vision system, its components, and recent work reported on food and agricultural produce.

  16. Design of apochromatic lens with large field and high definition for machine vision.

    PubMed

    Yang, Ao; Gao, Xingyu; Li, Mingfeng

    2016-08-01

    Precise machine vision detection of a large object at a finite working distance (WD) requires that the lens have high resolution over a large field of view (FOV). In this case, the effect of the secondary spectrum on image quality is not negligible. According to the detection requirements, a high-resolution apochromatic objective is designed and analyzed. The initial optical structure (IOS) is assembled from three segments. Next, the secondary spectrum of the IOS is corrected by replacing glasses using the dispersion vector analysis method based on the Buchdahl dispersion equation. Other aberrations are optimized with the commercial optical design software ZEMAX by properly choosing the optimization function operands. The optimized optical structure (OOS) has an f-number (F/#) of 3.08, a FOV of φ60 mm, a WD of 240 mm, and a modulation transfer function (MTF) of more than 0.1 at 320 cycles/mm for all fields. The design requirements for a nonfluorite-material apochromatic objective lens with a large field and high definition for machine vision detection have been achieved.

  17. Broiler weight estimation based on machine vision and artificial neural network.

    PubMed

    Amraei, S; Abdanan Mehdizadeh, S; Salari, S

    2017-04-01

    1. Machine vision and artificial neural network (ANN) procedures were used to estimate the live body weight of 30 broiler chickens, reared from 1 d old for 42 d. 2. Imaging was performed twice daily. To localise chickens within the pen, an ellipse-fitting algorithm was used, and the chickens' heads and tails were removed using the Chan-Vese method. 3. The correlations between body weight and 6 extracted physical features indicated strong correlations between body weight and 5 features: area, perimeter, convex area, and major and minor axis length. 5. According to statistical analysis, there was no significant difference between morning and afternoon data over the 42 d. 6. In an attempt to improve the accuracy of live weight approximation, different ANN techniques were used, including Bayesian regulation, Levenberg-Marquardt, scaled conjugate gradient and gradient descent. Bayesian regulation, with an R² value of 0.98, was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined, and most errors were less than 50 g.
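    The correlate-then-predict idea in this abstract can be sketched as follows. This is a generic illustration, not the authors' data or their Bayesian-regulated network: the area/weight numbers are invented, and a plain least-squares line stands in for the ANN.

    ```python
    # Hypothetical sketch: Pearson correlation between one extracted image
    # feature (segmented bird area, pixels) and body weight (grams), followed
    # by an ordinary least-squares fit used for weight prediction.

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / (sxx * syy) ** 0.5

    def ols_fit(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        return a, b               # weight ≈ a + b * area

    areas = [1200, 1850, 2400, 3100, 3900]   # invented feature values
    weights = [55, 240, 480, 790, 1150]      # invented weights, g

    r = pearson(areas, weights)              # strength of the correlation
    a, b = ols_fit(areas, weights)
    predicted = a + b * 2800                 # predicted weight for a new bird
    ```

    A strong feature-weight correlation (r close to 1) is what justifies feeding such features into a regression model or ANN in the first place.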

  18. Study About Ceiling Design for Main Control Room of NPP with HFE

    NASA Astrophysics Data System (ADS)

    Gu, Pengfei; Ni, Ying; Chen, Weihua; Chen, Bo; Zhang, Jianbo; Liang, Huihui

    Since human factors engineering (HFE) began to be applied to the control room design of nuclear power plants (NPPs), the human-machine interface (HMI) has developed steadily, especially with the use of digital technology. Compared with the analog technology used for human-machine interfaces in the past, human-machine interaction has been considerably enhanced. HFE and main control room (MCR) design engineering for NPPs form a multidisciplinary combination, mainly involving electrical and instrument control, reactor, machinery, systems engineering and management disciplines. However, the MCR not only contains the HMI provided by the equipment; more importantly, it provides a working environment for the operator, including the main control room ceiling. Ceiling design related to HFE influences staff performance, so environmental and aesthetic factors should also be considered in the design, especially through the introduction of professional design experience and evaluation methods. Based on implementation experience from the Ling Ao phase II and Hong Yanhe projects, this study analyzes the lighting effect, space partitioning, and visual load of the NPP main control room ceiling. Combined with the requirements of the applicable standards, the advantages and disadvantages of the main control room ceiling design are discussed, and, considering the requirements of lightweight construction, noise reduction, fire prevention and moisture protection, a ceiling design solution for the main control room is also discussed.

  19. Industry's tireless eyes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-08-01

    This article reports that there are literally hundreds of machine vision systems from which to choose. They range in cost from $10,000 to $1,000,000. Most have been designed for specific applications; the same system used for a different application may fail dismally. How can you avoid wasting money on inferior, useless, or nonexpandable systems? A good reference is the Automated Vision Association in Ann Arbor, Mich., a trade group comprised of North American machine vision manufacturers. Reputable suppliers caution users to do their homework before making an investment. Important considerations include comprehensive details on the objects to be viewed - that is, quantity, shape, dimension, size, and configuration details; lighting characteristics and variations; and component orientation details. Then, what do you expect the system to do - inspect, locate components, aid in robotic vision? Other criteria include system speed and the related accuracy and reliability. What are the projected benefits and system paybacks? Examine primarily paybacks associated with scrap and rework reduction as well as reduced warranty costs.

  20. Trends in communicative access solutions for children with cerebral palsy.

    PubMed

    Myrden, Andrew; Schudlo, Larissa; Weyand, Sabine; Zeyl, Timothy; Chau, Tom

    2014-08-01

    Access solutions may facilitate communication in children with limited functional speech and motor control. This study reviews current trends in access solution development for children with cerebral palsy, with particular emphasis on the access technology that harnesses a control signal from the user (e.g., movement or physiological change) and the output device (e.g., augmentative and alternative communication system) whose behavior is modulated by the user's control signal. Access technologies have advanced from simple mechanical switches to machine vision (e.g., eye-gaze trackers), inertial sensing, and emerging physiological interfaces that require minimal physical effort. Similarly, output devices have evolved from bulky, dedicated hardware with limited configurability to platform-agnostic, highly personalized mobile applications. Emerging case studies encourage the consideration of access technology for all nonverbal children with cerebral palsy with at least nascent contingency awareness. However, establishing robust evidence of the effectiveness of the aforementioned advances will require more expansive studies. © The Author(s) 2014.

  1. High-Speed Videography Overview

    NASA Astrophysics Data System (ADS)

    Miller, C. E.

    1989-02-01

    The field of high-speed videography (HSV) has continued to mature in recent years, due to the introduction of a mixture of new technology and extensions of existing technology. Recent low frame-rate innovations have the potential to dramatically expand the areas of information gathering and motion analysis at all frame rates. Progress at the zero frame rate is bringing the battle of film versus video to the field of still photography. The pressure to push intermediate frame rates higher continues, although the maximum achievable frame rate has remained stable for several years. Higher maximum recording rates appear technologically practical, but economic factors impose severe limitations on development. The application of diverse photographic techniques to video-based systems is under-exploited. The basics of HSV apply to other fields, such as machine vision and robotics. Present motion analysis systems continue to function mainly as an instant-replay replacement for high-speed movie film cameras. The interrelationship among lighting, shuttering and spatial resolution is examined.

  2. Testing meta tagger

    DTIC Science & Technology

    2017-12-21

    … rank, and computer vision. Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on … Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed.[1] Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term "Machine Learning" in 1959 while at IBM.[2] Evolved …

  3. Thermographic Sensing For On-Line Industrial Control

    NASA Astrophysics Data System (ADS)

    Holmsten, Dag

    1986-10-01

    It is today's emergence of thermoelectrically cooled, highly accurate infrared line scanners and imaging systems that has definitely made on-line infrared thermography (IRT) possible. Specifically designed for continuous use, these scanners are equipped with dedicated software capable of monitoring and controlling highly complex thermodynamic situations. This paper outlines some possible implications of using IRT on-line by describing uses of this technology in the steel-making (hot rolling) and automotive (machine vision) industries. A warning is also given that IRT technology not originally designed for automated applications, e.g. high-resolution imaging systems, should not be applied directly to an on-line measurement situation without its measurement resolution, accuracy and, especially, repeatability having been reliably proven. Some suitable testing procedures are briefly outlined at the end of the paper.

  4. Generating Contextual Descriptions of Virtual Reality (VR) Spaces

    NASA Astrophysics Data System (ADS)

    Olson, D. M.; Zaman, C. H.; Sutherland, A.

    2017-12-01

    Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of the visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds, based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.

  5. Machine vision system for automated detection of stained pistachio nuts

    NASA Astrophysics Data System (ADS)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved in manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bi-chromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bi-chromatic sorter reject stream and 15% for the small shelling stock stream.

  6. Artificial Intelligence.

    ERIC Educational Resources Information Center

    Wash, Darrel Patrick

    1989-01-01

    Making a machine seem intelligent is not easy. As a consequence, demand has been rising for computer professionals skilled in artificial intelligence and is likely to continue to go up. These workers develop expert systems and solve the mysteries of machine vision, natural language processing, and neural networks. (Editor)

  7. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    NASA Astrophysics Data System (ADS)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.

  8. Optical character recognition reading aid for the visually impaired.

    PubMed

    Grandin, Juan Carlos; Cremaschi, Fabian; Lombardo, Elva; Vitu, Ed; Dujovny, Manuel

    2008-06-01

    An optical character recognition (OCR) reading machine is a significant help for visually impaired patients. Such an instrument can markedly improve the quality of life of patients with low vision or blindness.

  9. Integrity Determination for Image Rendering Vision Navigation

    DTIC Science & Technology

    2016-03-01

    … identifying an object within a scene, tracking a SIFT feature between frames, or matching images and/or features for stereo vision applications. … at the object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community … The matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or …

  10. An ontological knowledge framework for adaptive medical workflow.

    PubMed

    Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir

    2008-10-01

    As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal, declarative knowledge representation model. They provide a foundation upon which machine-understandable knowledge can be obtained, and as a result, they make machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thereby serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses: from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology therefore makes our vision of personalized healthcare possible by capturing all necessary knowledge for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage and even create context-aware new medical workflows and execute them on the fly.

  11. Bioethics and Transhumanism.

    PubMed

    Porter, Allen

    2017-06-01

    Transhumanism is a "technoprogressive" socio-political and intellectual movement that advocates for the use of technology in order to transform the human organism radically, with the ultimate goal of becoming "posthuman." To this end, transhumanists focus on and encourage the use of new and emerging technologies, such as genetic engineering and brain-machine interfaces. In support of their vision for humanity, and as a way of reassuring those "bioconservatives" who may balk at the radical nature of that vision, transhumanists claim common ground with a number of esteemed thinkers and traditions, from the ancient philosophy of Plato and Aristotle to the postmodern philosophy of Nietzsche. It is crucially important to give proper scholarly attention to transhumanism now, not only because of its recent and ongoing rise as a cultural and political force (and the concomitant potential ramifications for bioethical discourse and public policy), but because of the imminence of major breakthroughs in the kinds of technologies that transhumanism focuses on. Thus, the articles in this issue of The Journal of Medicine and Philosophy are either explicitly about transhumanism or are on topics, such as the ethics of germline engineering and criteria for personhood, that are directly relevant to the debate between transhumanists (and technoprogressives more broadly) and bioconservatives. © The Author 2017. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi

    1997-01-01

    A low-power, high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in an SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second of computing power, a two-order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.

  13. Real-time image processing of TOF range images using a reconfigurable processor system

    NASA Astrophysics Data System (ADS)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

    In recent years, Time-of-Flight (TOF) sensors have had a significant impact on research fields in machine vision. In comparison with stereo vision systems and laser range scanners, they combine the advantages of active sensors, which provide accurate distance measurements, and of camera-based systems, which record a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
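    For context, the 4-phase-shift demodulation the abstract refers to recovers phase (and hence distance) from four correlation samples via an arctangent, which is why that function dominates the processing time. A minimal software sketch follows; sign conventions vary between sensors, and the 20 MHz modulation frequency is an assumed example value, not taken from the paper.

    ```python
    # Sketch of one common form of 4-phase-shift TOF demodulation.
    import math

    C = 299_792_458.0          # speed of light, m/s
    F_MOD = 20e6               # assumed modulation frequency, Hz

    def tof_distance(a0, a1, a2, a3):
        """Distance from four correlation samples at 0/90/180/270 degrees."""
        phase = math.atan2(a3 - a1, a0 - a2)   # the time-critical arctangent step
        if phase < 0:                          # wrap into [0, 2*pi)
            phase += 2 * math.pi
        return C * phase / (4 * math.pi * F_MOD)

    # Synthetic samples for a target whose true phase is pi/2:
    d = tof_distance(0.0, -1.0, 0.0, 1.0)
    ```

    In hardware, this floating-point `atan2` is what gets replaced by an iterative CORDIC unit or, as in the paper, a reconfigurable-processor implementation.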

  14. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838

  15. Vision-Based People Detection System for Heavy Machine Applications.

    PubMed

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-20

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance.

  16. Developing a machine vision system for simultaneous prediction of freshness indicators based on tilapia (Oreochromis niloticus) pupil and gill color during storage at 4°C.

    PubMed

    Shi, Ce; Qian, Jianping; Han, Shuai; Fan, Beilei; Yang, Xinting; Wu, Xiaoming

    2018-03-15

    The study assessed the feasibility of developing a machine vision system based on pupil and gill color changes in tilapia for simultaneous prediction of total volatile basic nitrogen (TVB-N), thiobarbituric acid (TBA) and total viable counts (TVC) during storage at 4°C. The pupils and gills were chosen, and conversion among the RGB, HSI and L*a*b* color spaces was performed automatically by an image processing algorithm. Multiple regression models were established by correlating pupil and gill color parameters with TVB-N, TVC and TBA (R² = 0.989–0.999). However, assessment of freshness based on gill color is destructive and time-consuming because the gill cover must be removed before images are captured. Finally, visualization maps of spoilage based on pupil color were achieved using image algorithms. The results show that assessment of tilapia pupil color parameters using machine vision can be used as a low-cost, on-line method for predicting freshness during 4°C storage. Copyright © 2017 Elsevier Ltd. All rights reserved.
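
    The color space conversion step can be sketched with the standard RGB-to-HSI formulas; this is the generic conversion, not the paper's exact implementation.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalised RGB (each in [0, 1]) to HSI.

    H is in radians [0, 2*pi); S and I are in [0, 1]. Hue is undefined
    for achromatic pixels (S = 0).
    """
    i = (r + g + b) / 3.0                         # intensity: channel mean
    if i == 0:
        return 0.0, 0.0, 0.0
    s = 1.0 - min(r, g, b) / i                    # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = math.acos(max(-1.0, min(1.0, num / den)))  # hue angle
    if b > g:
        h = 2.0 * math.pi - h
    return h, s, i

h, s, i = rgb_to_hsi(1.0, 0.0, 0.0)   # pure red: H near 0, S = 1, I = 1/3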

  17. Automatic decoding of facial movements reveals deceptive pain expressions

    PubMed Central

    Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang

    2014-01-01

    Summary In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. This simulation is so successful that it can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and after training improved accuracy only to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of the neural control systems involved in emotional signaling. PMID:24656830

  18. Gradual Reduction in Sodium Content in Cooked Ham, with Corresponding Change in Sensorial Properties Measured by Sensory Evaluation and a Multimodal Machine Vision System

    PubMed Central

    Greiff, Kirsti; Mathiassen, John Reidar; Misimi, Ekrem; Hersleth, Margrethe; Aursand, Ida G.

    2015-01-01

    The European diet today generally contains too much sodium (Na+). Partial substitution of NaCl by KCl has been shown to be a promising method for reducing sodium content. The aim of this work was to investigate the sensorial changes of cooked ham with reduced sodium content. Traditional sensorial evaluation and objective multimodal machine vision were used. The salt content in the hams was decreased from 3.4% to 1.4%, and 25% of the Na+ was replaced by K+. The salt reduction had the greatest influence on the sensory attributes salty taste, aftertaste, tenderness, hardness and color hue. The multimodal machine vision system showed changes in lightness as a function of reduced salt content. Compared to the reference ham (3.4% salt), replacing 25% of the Na+ ions by K+ ions gave no significant changes in WHC, moisture, pH, expressed moisture, the sensory profile attributes, or the surface lightness and shininess. A further reduction of salt down to 1.7–1.4% led to a decrease in WHC and an increase in expressible moisture. PMID:26422367

  19. Gradual Reduction in Sodium Content in Cooked Ham, with Corresponding Change in Sensorial Properties Measured by Sensory Evaluation and a Multimodal Machine Vision System.

    PubMed

    Greiff, Kirsti; Mathiassen, John Reidar; Misimi, Ekrem; Hersleth, Margrethe; Aursand, Ida G

    2015-01-01

    The European diet today generally contains too much sodium (Na+). Partial substitution of NaCl by KCl has been shown to be a promising method for reducing sodium content. The aim of this work was to investigate the sensorial changes of cooked ham with reduced sodium content. Traditional sensorial evaluation and objective multimodal machine vision were used. The salt content in the hams was decreased from 3.4% to 1.4%, and 25% of the Na+ was replaced by K+. The salt reduction had the greatest influence on the sensory attributes salty taste, aftertaste, tenderness, hardness and color hue. The multimodal machine vision system showed changes in lightness as a function of reduced salt content. Compared to the reference ham (3.4% salt), replacing 25% of the Na+ ions by K+ ions gave no significant changes in WHC, moisture, pH, expressed moisture, the sensory profile attributes, or the surface lightness and shininess. A further reduction of salt down to 1.7–1.4% led to a decrease in WHC and an increase in expressible moisture.

  20. Triadic split-merge sampler

    NASA Astrophysics Data System (ADS)

    van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap

    2018-04-01

    In machine vision, typical heuristic methods to extract parameterized objects from raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of optimally extracting such parameterized objects, given a correct definition of the model and of the type of noise at hand. One category of solvers for Bayesian models is Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. To address this, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, that performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in a machine vision context, its applications extend to the very general domain of statistical inference.
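
    For context, the RANSAC baseline mentioned above can be sketched on the same kind of line extraction task the paper uses for evaluation. The tolerance, iteration count and data below are illustrative, not taken from the paper.

```python
import random

def ransac_line(points, n_iters=200, tol=0.05, seed=0):
    """RANSAC line extraction: repeatedly fit a line through two random
    points, keep the hypothesis with the most inliers, then refit by
    least squares on the winning inlier set.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                              # skip vertical hypotheses
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Least-squares refit on the best inlier set.
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

# 20 exact points on y = 2x + 1 plus four gross outliers:
pts = [(x * 0.1, 2 * (x * 0.1) + 1) for x in range(20)] + \
      [(0.5, 9.0), (1.2, -4.0), (0.3, 7.5), (1.7, -2.0)]
slope, intercept = ransac_line(pts)
```

    Unlike the Bayesian samplers the paper studies, RANSAC gives a point estimate with no posterior over the number of lines or their parameters.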

  1. Machine Vision Within The Framework Of Collective Neural Assemblies

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1990-03-01

    The proposed mechanism for designing a robust machine vision system is based on the dynamic activity generated by the various neural populations embedded in nervous tissue. It is postulated that a hierarchy of anatomically distinct tissue regions is involved in visual sensory information processing. Each region may be represented as a planar sheet of densely interconnected neural circuits. Spatially localized aggregates of these circuits represent collective neural assemblies. Four dynamically coupled neural populations are assumed to exist within each assembly. In this paper we present a state-variable model for a tissue sheet derived from empirical studies of population dynamics. Each population is modelled as a nonlinear second-order system. It is possible to emulate certain observed physiological and psychophysiological phenomena of biological vision by properly programming the interconnective gains. Important early visual phenomena such as temporal and spatial noise insensitivity, contrast sensitivity and edge enhancement are discussed for a one-dimensional tissue model.
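
    A minimal simulation of one such nonlinear second-order population, under assumed parameter values and a sigmoidal nonlinearity (the paper's actual gains and nonlinearity may differ), might look like:

```python
import math

def simulate_population(u_ext, omega=2.0, zeta=1.0, dt=0.01, t_end=10.0):
    """Semi-implicit Euler simulation of one neural population modelled
    as a nonlinear second-order system driven through a sigmoid:

        x'' + 2*zeta*omega*x' + omega**2 * x = omega**2 * S(u_ext)

    All parameter values here are illustrative assumptions.
    """
    S = lambda u: 1.0 / (1.0 + math.exp(-u))   # sigmoidal activation
    x, v = 0.0, 0.0                            # population state and its rate
    for _ in range(int(t_end / dt)):
        a = omega ** 2 * (S(u_ext) - x) - 2.0 * zeta * omega * v
        v += a * dt
        x += v * dt
    return x

# A critically damped population settles at the sigmoid of its drive:
x_inf = simulate_population(1.0)
```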

  2. A robust embedded vision system feasible white balance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. In order to meet the need for efficiency and accuracy in embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, a bilinear interpolation algorithm is used to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. To verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 was designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions show that the proposed white balance algorithm effectively avoids color deviation, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
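
    The statistics-based initialization can be sketched with the classical gray-world assumption; the demosaicing and adaptive iterative refinement of the full algorithm are omitted here, and this is a generic sketch rather than the paper's exact formula.

```python
import numpy as np

def gray_world_gains(img):
    """Initial white-balance gains from R, G, B channel statistics
    (gray-world assumption): scale R and B so that their means match
    the mean of G.
    """
    means = img.reshape(-1, 3).mean(axis=0)
    return means[1] / means        # per-channel gains [gR, 1.0, gB]

def apply_gains(img, gains):
    return np.clip(img * gains, 0.0, 1.0)

# An image with a strong blue cast gets its channel means equalised:
img = np.ones((4, 4, 3)) * np.array([0.2, 0.4, 0.6])
balanced = apply_gains(img, gray_world_gains(img))
```

    Gray-world fails on scenes dominated by a single color, which is one motivation for following it with an iterative, adaptively stepped refinement.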

  3. Feature-based three-dimensional registration for repetitive geometry in machine vision

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2016-01-01

    As an important step in three-dimensional (3D) machine vision, 3D registration is a process of aligning two or multiple 3D point clouds that are collected from different perspectives together into a complete one. The most popular approach to register point clouds is to minimize the difference between these point clouds iteratively by Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align the point clouds that are generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved so that the 3D registration of two point clouds is to solve a rigid transformation. The comparison of our method and different ICP algorithms demonstrates that our proposed algorithm is more accurate, efficient and robust for repetitive geometry registration. Moreover, this method can also be used to solve high depth uncertainty problem caused by little camera baseline in vision-based 3D reconstruction. PMID:28286703
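
    Once 3D correspondences have been retrieved from image features, the rigid transformation has the classical closed-form SVD (Kabsch) solution sketched below. This is the generic closed-form step the abstract alludes to, not the authors' full pipeline.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q,
    given one-to-one 3D correspondences (Kabsch/SVD method).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

# Recover a known rotation (30 degrees about z) and translation:
th = np.pi / 6
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform(P, Q)
```

    Unlike ICP, which iterates nearest-neighbour matching and is prone to local minima on repetitive geometry, this solve is done once from feature matches.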

  4. Landmark navigation and autonomous landing approach with obstacle detection for aircraft

    NASA Astrophysics Data System (ADS)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.

    1997-06-01

    A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensor like gyros, accelerometers, artificial horizon, aerodynamic measuring devices and GPS with vision data taken by conventional CCD-cameras mounted on a pan and tilt platform, the position of the craft can be determined as well as the relative position to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark should be focused on by the vision system, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated dependent on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system e.g. due to poor visibility and tries to reinitialize the vision system or to set focus on another landmark available. During landing approach obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real-time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer generated imagery. Results from real-time simulation runs are given.

  5. Effectiveness of Assistive Technologies for Low Vision Rehabilitation: A Systematic Review

    ERIC Educational Resources Information Center

    Jutai, Jeffrey W.; Strong, J. Graham; Russell-Minda, Elizabeth

    2009-01-01

    "Low vision" describes any condition of diminished vision, uncorrectable by standard eyeglasses, contact lenses, medication, or surgery, that disrupts a person's ability to perform common age-appropriate visual tasks. Examples of assistive technologies for vision rehabilitation include handheld magnifiers; electronic vision-enhancement…

  6. Some examples of image warping for low vision prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Loshin, David S.

    1988-01-01

    NASA has developed an image processor, the Programmable Remapper, for certain functions in machine vision. The Remapper performs a highly arbitrary geometric warping of an image at video rate. It might ultimately be shrunk to a size and cost that could allow its use in a low-vision prosthesis. Coordinate warpings have been developed for retinitis pigmentosa (tunnel vision) and for maculopathy (loss of central field) that are intended to make the best use of the patient's remaining viable retina. The rationales and mathematics are presented for some warpings that will be tried in clinical studies using the Remapper's prototype.
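
    A simple radial warping of the kind used for tunnel vision can be sketched as follows. The power-law profile and all radii are illustrative assumptions; the Remapper supports arbitrary mappings and the clinical warpings may differ.

```python
import math

def remap_tunnel(u, v, cx, cy, r_field, r_viable, gamma=0.5):
    """Radial coordinate warping that compresses the full image field
    (radius r_field around centre (cx, cy)) into a patient's remaining
    central field (radius r_viable), with gamma < 1 giving the centre
    proportionally more of the viable area:

        r' = r_viable * (r / r_field) ** gamma
    """
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return (cx, cy)
    r_new = r_viable * (r / r_field) ** gamma
    return (cx + dx * r_new / r, cy + dy * r_new / r)

# A pixel at the edge of a 200-pixel field lands at the edge of a
# 50-pixel viable retina:
u_out, v_out = remap_tunnel(200.0, 0.0, 0.0, 0.0, 200.0, 50.0)
```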

  7. Novel technologies for monitoring the in-line quality of virgin olive oil during manufacturing and storage.

    PubMed

    Beltrán Ortega, Julio; Martínez Gila, Diego M; Aguilera Puerto, Daniel; Gámez García, Javier; Gómez Ortega, Juan

    2016-11-01

    The quality of virgin olive oil is related to the agronomic conditions of the olive fruits and the process variables of the production process. Nowadays, food markets demand better products in terms of safety, health and organoleptic properties at competitive prices. Innovative techniques for process control, inspection and classification have been developed in order to achieve these requirements. This paper presents a review of the most significant sensing technologies that are increasingly used in the olive oil industry to supervise and control the virgin olive oil production process. Throughout the present work, the main research studies in the literature that employ non-invasive technologies such as infrared spectroscopy, computer vision, machine olfaction technology, electronic tongues and dielectric spectroscopy are analysed and their main results and conclusions are presented. These technologies are used on olive fruit, olive slurry and olive oil to determine parameters such as acidity, peroxide indexes, ripening indexes, organoleptic properties and minor components, among others. © 2016 Society of Chemical Industry.

  8. Multispectral image analysis for object recognition and classification

    NASA Astrophysics Data System (ADS)

    Viau, C. R.; Payeur, P.; Cretu, A.-M.

    2016-05-01

    Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
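
    The fusion of visual and thermal features into a single SVM can be sketched as below. A minimal linear SVM trained by full-batch subgradient descent on the hinge loss stands in for whatever SVM package the authors used, and the feature names and values are made up.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    """Minimal linear SVM: full-batch subgradient descent on the
    L2-regularised hinge loss. A bias term is folded in as a constant
    feature.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        margins = y * (Xb @ w)
        active = margins < 1                       # margin violators
        grad = lam * w - (Xb * (active * y)[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Toy fused descriptors: two "visual" features concatenated with one
# "thermal" feature per object (hypothetical values, two classes):
X_vis = np.array([[2.0, 1.5], [2.5, 2.0], [3.0, 1.0],
                  [-2.0, -1.5], [-2.5, -1.0], [-3.0, -2.0]])
X_thermal = np.array([[1.0], [1.2], [0.8], [-1.0], [-0.9], [-1.1]])
X = np.hstack([X_vis, X_thermal])
y = np.array([1, 1, 1, -1, -1, -1], float)
w = train_linear_svm(X, y)
pred = predict(w, X)
```

    The point of the sketch is the feature concatenation: the classifier sees one fused vector per object, so complementary visual and thermal cues can both contribute to the decision boundary.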

  9. A low-cost color vision system for automatic estimation of apple fruit orientation and maximum equatorial diameter

    USDA-ARS?s Scientific Manuscript database

    The overall objective of this research was to develop an in-field presorting and grading system to separate undersized and defective fruit from fresh market-grade apples. To achieve this goal, a cost-effective machine vision inspection prototype was built, which consisted of a low-cost color camera,...

  10. Controllable liquid colour-changing lenses with microfluidic channels for vision protection, camouflage and optical filtering based on soft lithography fabrication.

    PubMed

    Zhang, Min; Li, Songjing

    2016-01-01

    In this work, liquid colour-changing lenses for vision protection, camouflage and optical filtering are developed by manually circulating colour liquids through microfluidic channels on the lenses. Soft lithography technology is applied to fabricate the silicone liquid colour-changing layers with microfluidic channels on the lenses instead of mechanical machining. To increase the hardness and abrasion resistance of the silicone colour-changing layers on the lenses, suitable fabrication parameters such as a 6:1 (mass ratio) mixing proportion and a 100 °C curing temperature for 2 h are adopted for a better soft lithography process. Meanwhile, a new surface treatment for the irreversible bonding of the silicone colour-changing layer to the optical resin (CR39) substrate lens using a 5 % (volume ratio) 3-Aminopropyltriethoxysilane solution is proposed. The vision protection, camouflage and optical filtering functions of the lenses are investigated with different channel designs and multi-layer structures. Each application not only meets its functional demands well, but also shows the advantages of functional flexibility, rapid prototyping and good controllability compared with traditional approaches. Besides optometry, some other designs and applications of the lenses are proposed for potential future use.

  11. Learning surface molecular structures via machine vision

    NASA Astrophysics Data System (ADS)

    Ziatdinov, Maxim; Maksov, Artem; Kalinin, Sergei V.

    2017-08-01

    Recent advances in high-resolution scanning transmission electron and scanning probe microscopies have allowed researchers to perform measurements of materials' structural parameters and functional properties in real space with picometre precision. In many technologically relevant atomic and/or molecular systems, however, the information of interest is distributed spatially in a non-uniform manner and may have a complex multi-dimensional nature. One of the critical issues, therefore, lies in being able to accurately identify (`read out') all the individual building blocks in different atomic/molecular architectures, as well as the more complex patterns that these blocks may form, on a scale of hundreds and thousands of individual atomic/molecular units. Here we employ machine vision to read and recognize complex molecular assemblies on surfaces. Specifically, we combine a Markov random field model and convolutional neural networks to classify the structural and rotational states of all individual building blocks in a molecular assembly on a metallic surface visualized in high-resolution scanning tunneling microscopy measurements. We show how the resulting full decoding of the system allows us to directly construct a pair density function, a centerpiece of the disorder-property relationship paradigm, as well as to analyze spatial correlations between multiple order parameters at the nanoscale and to elucidate a reaction pathway involving molecular conformation changes. The method represents a significant shift in the way atomic and/or molecular resolved microscopic images are analyzed and can be applied to a variety of other microscopic measurements of structural, electronic, and magnetic order in different condensed matter systems.
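
    Constructing a pair density function starts from the pairwise distances between the decoded molecular positions. A minimal sketch of that raw histogram step (without the normalisation by shell area and density that a full radial distribution function would add):

```python
import numpy as np

def pair_distance_histogram(xy, r_max, n_bins):
    """Histogram of all pairwise distances between decoded molecule
    positions: the raw ingredient of a pair density function.
    """
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    iu = np.triu_indices(len(xy), k=1)        # count each pair once
    counts, edges = np.histogram(d[iu], bins=n_bins, range=(0.0, r_max))
    return counts, edges

# Three molecules on a line at x = 0, 1, 2 give distances {1, 1, 2}:
xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
counts, edges = pair_distance_histogram(xy, r_max=3.0, n_bins=3)
```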

  12. Evaluation of body weight of sea cucumber Apostichopus japonicus by computer vision

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Xu, Qiang; Liu, Shilin; Zhang, Libin; Yang, Hongsheng

    2015-01-01

    Apostichopus japonicus (Holothuroidea, Echinodermata) is an ecologically and economically important species in East Asia. The conventional biometric monitoring method involves diving for samples and weighing them above water, with high variability in weight measurement due to variation in the quantity of water in the respiratory tree and the intestinal content of this species. Recently, video survey methods have been applied widely in biometric detection of underwater benthos. However, because of the high flexibility of the A. japonicus body, video surveys are less used for monitoring sea cucumbers. In this study, we designed a model to evaluate the wet weight of A. japonicus, using machine vision technology combined with a support vector machine (SVM), that can be used in field surveys of the A. japonicus population. Continuous dorsal images of free-moving A. japonicus individuals in seawater were captured, allowing extraction of the core body edge as well as thorn segmentation. Parameters including body length, body breadth, perimeter and area were extracted from the core body edge images and used in SVM regression to predict the weight of A. japonicus, for comparison with a power model. Results indicate that using SVM regression to predict the weight of 33 A. japonicus individuals is accurate (R² = 0.99) and compatible with the power model (R² = 0.96). The image-based analysis and size-weight regression models in this study may be useful for body weight evaluation of A. japonicus in lab and field studies.
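
    The power model used as the baseline for comparison is the allometric size-weight law W = a * L**b, conventionally fitted by linear regression in log-log space. The lengths and coefficients below are synthetic, chosen only to show the fit recovering known values.

```python
import numpy as np

def fit_power_model(length, weight):
    """Fit W = a * L**b by linear regression in log-log space:
    log(W) = log(a) + b * log(L)."""
    b, log_a = np.polyfit(np.log(length), np.log(weight), 1)
    return np.exp(log_a), b

L = np.array([5.0, 8.0, 10.0, 12.0, 15.0])   # body length (synthetic)
W = 2.0 * L ** 3.0                           # exact power-law weights
a, b = fit_power_model(L, W)
```

    An SVM regressor can improve on this baseline when body deformation makes the length-weight relation deviate from a single power law, at the cost of needing multiple shape parameters rather than length alone.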

  13. Infrared machine vision system for the automatic detection of olive fruit quality.

    PubMed

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection and pixel intensity values to classify the whole fruit. The detection of defects involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives that has the potential for use in offline inspection and in online sorting for defects and the presence of surface damage, easily distinguishing those that do not meet minimum quality requirements. Crown Copyright © 2013 Published by Elsevier B.V. All rights reserved.
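
    The nonparametric pixel classification step can be sketched with per-class intensity histograms: each NIR pixel is labelled by whichever class model (healthy or defective) assigns it the higher likelihood. Bin count, training intensities and the likelihood-ratio rule are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def build_model(samples, n_bins=32):
    """Normalised intensity histogram: a simple nonparametric model of
    one pixel class, with Laplace smoothing so no bin is exactly zero.
    """
    h, _ = np.histogram(samples, bins=n_bins, range=(0.0, 1.0))
    return (h + 1.0) / (h.sum() + n_bins)

def classify_pixels(img, model_healthy, model_defect):
    """Label each NIR pixel (intensity in [0, 1]) as defective when the
    defect model gives it the higher likelihood."""
    bins = np.minimum((img * len(model_healthy)).astype(int),
                      len(model_healthy) - 1)
    return model_defect[bins] > model_healthy[bins]   # True = defective

# Toy training data: healthy olive surface bright in NIR, bruises dark.
healthy_train = np.linspace(0.6, 0.9, 200)
defect_train = np.linspace(0.1, 0.35, 200)
mh = build_model(healthy_train)
md = build_model(defect_train)
img = np.array([[0.8, 0.85], [0.2, 0.75]])   # one dark (defective) pixel
mask = classify_pixels(img, mh, md)
```

    A whole fruit can then be rejected when the defective fraction of its pixels exceeds a chosen threshold.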

  14. Visualization of anthropometric measures of workers in computer 3D modeling of work place.

    PubMed

    Mijović, B; Ujević, D; Baksa, S

    2001-12-01

    In this work, 3D visualization of a work place has been performed by means of a computer-made 3D machine model and computer animation of a worker. By visualizing 3D characters in inverse kinematic and dynamic relation with the operating part of a machine, the biomechanical characteristics of the worker's body have been determined. The dimensions of the machine have been determined by inspection of technical documentation as well as by direct measurements and camera recordings of the machine. On the basis of the measured body heights of workers, all relevant anthropometric measures have been determined by a computer program developed by the authors. From the anthropometric measures, the fields of vision and the reach zones, the exact postures of workers performing technological procedures at the work place were determined. The minimal and maximal rotation angles and the translation of the upper and lower arm, which are the basis for the analysis of worker load, were analyzed. The dimensions of the space occupied by the body are obtained by computer anthropometric analysis of movement, e.g. range of the arms and position of the legs, head and back. The influence of work place design on correct worker posture during work has been reconsidered, so that energy consumption and fatigue can be reduced to a minimum.

  15. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines their results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for an assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.

  16. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines their results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for an assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  17. Robotics supporting autonomy. 5th French Japanese Conference on Bio-ethics.

    PubMed

    Gelin, Rodolphe

    2013-12-01

    The aim of this paper is to propose a new vision of robots. Generally seen as a threat to humanity, or at least to employment, this new kind of machine can, as we will demonstrate, be a support not only for people losing their autonomy but for everyone. Robots will not replace people; they will assist them. The mass production of these companion robots will create a new industry that could take over from the automotive and computer industries in this century. This access to the mass market will require solving technological and acceptability problems through the joint work of researchers, engineers, users and the major stakeholders of our society.

  18. Off-highway vehicle technology roadmap.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2002-02-07

    The off-highway sector is under increasing pressure to reduce operating costs (including fuel costs) and to reduce emissions. Recognizing this, the Society of Automotive Engineers and the U.S. Department of Energy (DOE) convened a workshop in April 2001 (ANL 2001) to (1) determine the interest of the off-highway sector (consisting of agriculture, construction, surface mining and inland marine) in crafting a shared vision of off-highway heavy machines of the future and (2) identify critical research and development (R&D) needs for minimizing off-highway vehicle emissions while cost-effectively maintaining or enhancing system performance. The workshop also enabled government and industry participants to exchange information. During the workshop, it became clear that the challenges facing the heavy, surface-based off-highway sector can be addressed in three major machine categories: (1) engine/aftertreatment and fuels/lubes, (2) machine systems, and (3) thermal management. Working groups convened to address these topical areas. The status of off-highway technologies was determined, critical technical barriers to achieving future emission standards were identified, and strategies and technologies for reducing fuel consumption were discussed. Priority areas for R&D were identified. Given the apparent success of the discussions at the workshop, several participants from industry agreed to help in the formation of a joint industry/government "roadmap" team. The U.S. Department of Energy's Office of Heavy Vehicle Technologies has an extensive role in researching ways to make heavy-duty trucks and trains more efficient, with respect to both fuel usage and air emissions. The workshop participants felt that a joint industry/government research program that addresses the unique needs of the off-highway sector would complement the current research program for highway vehicles.
With industry expertise, in-kind contributions, and federal government funding (coupled with the resources at the DOE's national laboratories), an effective program can be planned and executed. This document outlines potential technology R&D pathways to greatly reduce emissions from the off-highway sector while also reducing fuel costs cost-effectively and safely. The status of technology, technical targets, barriers, and technical approaches toward R&D are presented. The program schedule and milestones are included.

  19. A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding

    PubMed Central

    Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo

    2017-01-01

    For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may differ from the predetermined path. Therefore, it is important to recognize the actual groove position using machine vision methods and perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. This method obtains two kinds of visual information nearly at the same time, namely the 2D pixel coordinates of the groove under uniform lighting and 3D point cloud data of the workpiece surface under cross-line laser lighting. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments were carried out at an actuator speed of 2300 mm/min and a groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied in path teaching fields of 3D complex components. PMID:28492481
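
    The fusion step pairs each 2D groove pixel with a depth value from the laser-scanned point cloud. As a minimal sketch (not the paper's actual calibration pipeline), a pixel can be back-projected to a camera-frame 3D point with the pinhole model; the intrinsic parameters and function name below are illustrative assumptions:

```python
# Back-project a 2D groove pixel (u, v) to a camera-frame 3D point using a
# depth value, via the pinhole camera model. Intrinsics are illustrative only.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Return camera-frame (X, Y, Z) for pixel (u, v) at the given depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps to a point straight ahead of the camera:
point = backproject(u=320, v=240, depth=100.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```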

  20. Applications of color machine vision in the agricultural and food industries

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Ludas, Laszlo I.; Morgan, Mark T.; Krutz, Gary W.; Precetti, Cyrille J.

    1999-01-01

    Color is an important factor in the agricultural and food industries. Agricultural or prepared food products are often graded by producers and consumers using color parameters. Color is used to estimate maturity and to sort produce for defects, but also to perform genetic screenings or make aesthetic judgements. The task of sorting produce following a color scale is very complex and requires special illumination and training. Moreover, this task cannot be performed for long durations without fatigue and loss of accuracy. This paper describes a machine vision system designed to perform color classification in real time. Applications for sorting a variety of agricultural products are included, e.g. seeds, meat, baked goods, plants and wood. First, the theory of color classification of agricultural and biological materials is introduced. Then, some tools for classifier development are presented. Finally, the implementation of the algorithm on real-time image processing hardware and example applications for industry are described. This paper also presents an image analysis algorithm and a prototype machine vision system that were developed for industry. This system will automatically locate the surface of some plants using a digital camera and predict information such as the size, potential value and type of the plant. The algorithm developed will be feasible for real-time identification in an industrial environment.
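
    Real-time color classification of the kind described often reduces to assigning each pixel to the nearest reference color. A minimal nearest-centroid sketch, with made-up centroid values and class names (the paper's actual classifier is more elaborate):

```python
# Nearest-centroid color classifier: each class is a mean RGB vector, and a
# pixel is assigned to the closest centroid by squared Euclidean distance.

CENTROIDS = {
    "ripe":   (200, 60, 40),   # illustrative reference colors
    "unripe": (80, 180, 60),
}

def classify_pixel(rgb, centroids=CENTROIDS):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(rgb, centroids[label]))

label = classify_pixel((210, 70, 50))  # close to the "ripe" centroid
```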

  1. Real-time machine vision system using FPGA and soft-core processor

    NASA Astrophysics Data System (ADS)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. The image component labeling and feature extraction modules ran in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA-based machine vision system that we propose has a high frame rate, low latency and much lower power consumption than commercially available smart camera solutions.
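
    The component labeling and feature extraction stages are implemented in FPGA hardware in the paper; as a software sketch of what those stages compute, a flood-fill labeling of a binary image followed by per-component area and centroid features might look like:

```python
# Connected-component labeling of a binary image (4-connectivity, flood fill),
# followed by per-component area and centroid feature extraction.

def label_components(img):
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not labels[r][c]:
                next_label += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and img[y][x] and not labels[y][x]:
                        labels[y][x] = next_label
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, next_label

def component_features(labels, n):
    feats = {}
    for k in range(1, n + 1):
        pix = [(r, c) for r, row in enumerate(labels) for c, v in enumerate(row) if v == k]
        area = len(pix)
        cy = sum(p[0] for p in pix) / area
        cx = sum(p[1] for p in pix) / area
        feats[k] = {"area": area, "centroid": (cy, cx)}
    return feats
```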

  2. A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding.

    PubMed

    Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo

    2017-05-11

    For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may differ from the predetermined path. Therefore, it is important to recognize the actual groove position using machine vision methods and perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. This method obtains two kinds of visual information nearly at the same time, namely the 2D pixel coordinates of the groove under uniform lighting and 3D point cloud data of the workpiece surface under cross-line laser lighting. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments were carried out at an actuator speed of 2300 mm/min and a groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied in path teaching fields of 3D complex components.

  3. Toward open set recognition.

    PubMed

    Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E

    2013-07-01

    To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.
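
    The core open-set idea can be illustrated in a few lines: rather than always returning the top-scoring known class, reject the sample as unknown when its best score falls below a threshold. The scores, class names and threshold here are illustrative, not the paper's 1-vs-set formulation itself:

```python
# Open-set decision rule in miniature: accept the best-scoring known class
# only if its score clears a rejection threshold; otherwise return "unknown".

def open_set_decide(scores, threshold=0.5):
    """scores: dict mapping class name -> decision score."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"

decision = open_set_decide({"cat": 0.9, "dog": 0.2})  # "cat"
```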

  4. Personal mobility and manipulation using robotics, artificial intelligence and advanced control.

    PubMed

    Cooper, Rory A; Ding, Dan; Grindle, Garrett G; Wang, Hongwu

    2007-01-01

    Recent advancements in computation, robotics, machine learning, communication, and miniaturization technologies bring us closer to futuristic visions of compassionate intelligent devices. The missing element is a basic understanding of how to relate human functions (physiological, physical, and cognitive) to the design of intelligent devices and systems that aid and interact with people. Our stakeholder and clinician consultants identified a number of mobility barriers that have been intransigent to traditional approaches. The most important physical obstacles are stairs, steps, curbs, doorways (doors), rough/uneven surfaces, weather hazards (snow, ice), crowded/cluttered spaces, and confined spaces. Focus group participants suggested a number of ways to make interaction simpler, including natural language interfaces such as the ability to say "I want a drink", a library of high-level commands (open a door, park the wheelchair, ...), and a touchscreen interface with images so the user could point and use other gestures.

  5. ROBIN: a platform for evaluating automatic target recognition algorithms: I. Overview of the project and presentation of the SAGEM DS competition

    NASA Astrophysics Data System (ADS)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    The last five years have seen a renewal of Automatic Target Recognition applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes their comparison through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single- and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, it is the first time that a database of this importance (several hundred thousand visible and infrared hand-annotated images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies, for various application contexts. ROBIN's consortium includes major companies and research centres involved in computer vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, is focused on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range in a "simple" background, and can be used to train algorithms. 
The second set contains many views of the same vehicle in different contexts and situations simulating operational scenarios.

  6. Machine vision method for online surface inspection of easy open can ends

    NASA Astrophysics Data System (ADS)

    Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel

    2006-10-01

    The easy-open can end manufacturing process in the food canning sector currently relies on a manual, non-destructive testing procedure to guarantee can end repair coating quality. This surface inspection is based on a visual inspection made by human inspectors. Due to the high production rate (100 to 500 ends per minute), only a small part of each lot is verified (statistical sampling). An automatic, online inspection system, based on machine vision, has therefore been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. This surface inspection system checks the total production, classifies the ends in agreement with an expert human inspector, supplies interpretability to the operators so that they can find failure causes and reduce mean time to repair during failures, and allows the minimum can end repair coating quality to be modified.
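
    A fuzzy accept/reject decision of the kind described can be sketched with a single piecewise-linear membership function; the coverage feature, membership shape and cut level below are assumptions for illustration, not the paper's actual model:

```python
# Toy fuzzy acceptance rule: map measured coating coverage to a membership
# degree in "acceptable", then reject the can end below a cut level.

def membership_acceptable(coverage, low=0.85, high=0.95):
    """Piecewise-linear membership: 0 below `low`, 1 above `high`."""
    if coverage <= low:
        return 0.0
    if coverage >= high:
        return 1.0
    return (coverage - low) / (high - low)

def decide(coverage, cut=0.5):
    return "accept" if membership_acceptable(coverage) >= cut else "reject"
```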

  7. Predicting pork loin intramuscular fat using computer vision system.

    PubMed

    Liu, J-H; Sun, X; Young, J M; Bachmeier, L A; Newman, D J

    2018-09-01

    The objective of this study was to investigate the ability of a computer vision system to predict pork intramuscular fat percentage (IMF%). Center-cut loin samples (n = 85) were trimmed of subcutaneous fat and connective tissue. Images were acquired, and pixels were segregated to estimate image IMF% and 18 image color features for each image. Subjective IMF% was determined by a trained grader. Ether extract IMF% was calculated using the ether extract method. Image color features and image IMF% were used as predictors for stepwise regression and support vector machine models. Results showed that subjective IMF% had a correlation of 0.81 with ether extract IMF%, while image IMF% had a 0.66 correlation with ether extract IMF%. Accuracy rates for regression models were 0.63 for stepwise regression and 0.75 for the support vector machine. Although subjective IMF% was shown to give better predictions, the results from the computer vision system demonstrate its potential as a tool for predicting pork IMF% in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
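
    The agreement figures quoted (0.81 and 0.66) are correlation coefficients between an IMF% estimate and the ether extract reference. For illustration, a plain-Python Pearson correlation on made-up sample values:

```python
# Pearson correlation between two equally long sequences of measurements.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson([2.1, 3.4, 1.8, 4.0], [2.3, 3.1, 1.9, 4.2])  # illustrative IMF% pairs
```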

  8. Experimental results in autonomous landing approaches by dynamic machine vision

    NASA Astrophysics Data System (ADS)

    Dickmanns, Ernst D.; Werner, Stefan; Kraus, S.; Schell, R.

    1994-07-01

    The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to on board autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was developed initially, it is also used for data fusion from a range of sensors. By prediction error feedback an internal representation of the aircraft state relative to the runway in 3-D space and time is servo- maintained in the interpretation process, from which the control applications required are being derived. The validity and efficiency of the approach have been proven both in hardware- in-the-loop simulations and in flight experiments with a twin turboprop aircraft Do128 under perturbations from cross winds and wind gusts. The software package has been ported to `C' and onto a new transputer image processing platform; the system has been expanded for bifocal vision with two cameras of different focal length mounted fixed relative to each other on a two-axes platform for viewing direction control.

  9. Autonomous proximity operations using machine vision for trajectory control and pose estimation

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.; Sternberg, Stanley R.

    1991-01-01

    A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running on an 80386-based personal computer, using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, upon which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether the signature matches the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
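
    A radial signature of the kind described can be sketched as the farthest centroid-to-boundary distance in each of a set of angular bins; the binning scheme here is an assumption for illustration:

```python
# Radial signature of a 2D point set: for each angular bin around the shape
# centroid, record the largest centroid-to-point distance. Matching such
# signatures against stored references drives the recognition step.

import math

def radial_signature(points, n_angles=8):
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    sig = [0.0] * n_angles
    for x, y in points:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        bin_ = int(ang / (2 * math.pi) * n_angles) % n_angles
        sig[bin_] = max(sig[bin_], math.hypot(x - cx, y - cy))
    return sig
```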

  10. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    NASA Astrophysics Data System (ADS)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

    Although machine learning holds an enormous promise for autonomous space robots, it is currently not employed because of the inherent uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
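
    The SSL setup uses past stereo depths as trusted labels for a monocular estimator. In miniature, and with an assumed scalar image feature and made-up data, that is just a supervised fit whose labels come from the robot's own stereo system:

```python
# Self-supervised learning in miniature: stereo depths logged earlier act as
# "trusted ground truth" labels for a monocular predictor. Here a single
# scalar image feature is fit to stereo depth by ordinary least squares.

def fit_monocular(features, stereo_depths):
    n = len(features)
    mx = sum(features) / n
    my = sum(stereo_depths) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(features, stereo_depths))
    sxx = sum((x - mx) ** 2 for x in features)
    a = sxy / sxx
    b = my - a * mx
    return a, b  # monocular estimate: depth approx a * feature + b

# Train on logged stereo data, then predict with one camera only:
a, b = fit_monocular([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
mono_depth = a * 2.5 + b
```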

  11. Proceedings of the 1986 IEEE international conference on systems, man and cybernetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-01-01

    This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.

  12. On-Tree Mango Fruit Size Estimation Using RGB-D Images

    PubMed Central

    Wang, Zhenglin; Verma, Brijesh

    2017-01-01

    In-field mango fruit sizing is useful for estimation of fruit maturation and size distribution, informing the decision to harvest, harvest resourcing (e.g., tray insert sizes), and marketing. In-field machine vision imaging has been used for fruit counting, but assessment of fruit size from images also requires estimation of the camera-to-fruit distance. Low-cost examples of three technologies for assessment of camera-to-fruit distance were assessed: an RGB-D (depth) camera, a stereo vision camera and a Time of Flight (ToF) laser rangefinder. The RGB-D camera was recommended on cost and performance, although it functioned poorly in direct sunlight. The RGB-D camera was calibrated, and depth information was matched to the RGB image. To detect fruit, cascade detection with a histogram of oriented gradients (HOG) feature was used; then Otsu’s method, followed by color thresholding, was applied in the CIE L*a*b* color space to remove background objects (leaves, branches etc.). A one-dimensional (1D) filter was developed to remove the fruit pedicels, and an ellipse fitting method was employed to identify well-separated fruit. Finally, fruit lineal dimensions were calculated using the RGB-D depth information, fruit image size and the thin lens formula. Root Mean Square Errors (RMSE) of 4.9 and 4.3 mm were achieved for estimated fruit length and width, respectively, relative to manual measurement, for which repeated human measures were characterized by a standard deviation of 1.2 mm. In conclusion, the RGB-D method for rapid in-field mango fruit size estimation is practical in terms of cost and ease of use, but cannot be used in direct intense sunshine. We believe this work represents the first practical implementation of machine vision fruit sizing in the field, with practicality gauged in terms of cost and simplicity of operation. PMID:29182534
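
    The final sizing step follows from the thin lens formula: magnification is f/(d - f), so an object's real extent is its image extent times (d - f)/f, which for d much larger than f is approximately image extent times d/f. A sketch with illustrative numbers:

```python
# Thin-lens object sizing: given the object's extent on the sensor, the
# camera-to-object distance and the focal length, recover the real extent.

def object_size(image_size_mm, distance_mm, focal_mm):
    """Estimate real object extent from image extent and distance (thin lens)."""
    return image_size_mm * (distance_mm - focal_mm) / focal_mm

# e.g. a 1 mm image extent at 500 mm range with a 10 mm lens:
size = object_size(1.0, 500.0, 10.0)  # 49.0 mm
```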

  13. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    PubMed

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
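
    The state machine layer that links recognized imagery classes to robot actions can be sketched as a transition table; the states, command names and actions below are illustrative, not the paper's actual protocol:

```python
# Minimal command state machine: (state, recognized command) -> (next state,
# robot action). Unknown commands leave the state unchanged and do nothing.

TRANSITIONS = {
    ("await_pawn", "select_pawn"): ("await_cup", "grasp_pawn"),
    ("await_cup", "select_cup"):   ("await_pawn", "place_in_cup"),
}

def step(state, command):
    """Return (next_state, robot_action); None means no action issued."""
    return TRANSITIONS.get((state, command), (state, None))

state, action = step("await_pawn", "select_pawn")  # ("await_cup", "grasp_pawn")
```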

  14. Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement

    PubMed Central

    Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.

    2017-01-01

    Purpose To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design Expert perspective. Methods An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user’s eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975

  15. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    PubMed Central

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. Advances in video technology and computer image processing provide the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. 
The computer vision technique applied to this software is innovative in non-human animal behaviour science. Further improvements and validation are needed, and future applications and limitations are discussed. PMID:27415814

  16. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    PubMed

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. Advances in video technology and computer image processing provide the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. 
The computer vision technique applied to this software is innovative in non-human animal behaviour science. Further improvements and validation are needed, and future applications and limitations are discussed.

  17. Aquatic Toxic Analysis by Monitoring Fish Behavior Using Computer Vision: A Recent Progress

    PubMed Central

    Fu, Longwen; Liu, Zuoyi

    2018-01-01

    Video tracking-based biological early warning systems have achieved great progress with advanced computer vision and machine learning methods. The ability to track multiple biological organisms in video has improved considerably in recent years. Video-based behavioral monitoring has become a common tool for acquiring quantified behavioral data for aquatic risk assessment. Investigation of behavioral responses under chemical and environmental stress has been boosted by rapidly developing machine learning and artificial intelligence. In this paper, we introduce the fundamentals of video tracking and present pioneering work on the precise tracking of groups of individuals in 2D and 3D space. Technical and practical issues encountered in video tracking are explained. Subsequently, toxicity analysis based on fish behavioral data is summarized. Frequently used computational and machine learning methods are explained along with their applications in aquatic toxicity detection and abnormal pattern analysis. Finally, the advantages of recently developed deep learning approaches for toxicity prediction are presented. PMID:29849612
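
    At the heart of multi-organism video tracking is frame-to-frame data association. A greedy nearest-neighbour sketch (real trackers add gating, motion models and identity management):

```python
# Greedy nearest-neighbour association: match each tracked centroid from the
# previous frame to the closest unclaimed detection in the current frame.

def associate(prev, curr):
    """Return {prev_index: curr_index} matches between two centroid lists."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    unused = set(range(len(curr)))
    matches = {}
    for i, p in enumerate(prev):
        if not unused:
            break
        j = min(unused, key=lambda k: d2(p, curr[k]))
        matches[i] = j
        unused.discard(j)
    return matches
```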

  18. A Developmental Approach to Machine Learning?

    PubMed Central

    Smith, Linda B.; Slone, Lauren K.

    2017-01-01

    Visual learning depends on both the algorithms and the training material. This essay considers the natural statistics of infant- and toddler-egocentric vision. These natural training sets for human visual object recognition are very different from the training data fed into machine vision systems. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order – with slow, smooth visual changes moment-to-moment, and developmentally ordered transitions in scene content. We propose that the skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines. PMID:29259573

  19. Change detection and classification of land cover in multispectral satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.

    Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.
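    The clustering step described above assigns labels using a distance metric that mixes spectral and textural feature groups. The sketch below is not the authors' CoSA implementation; it is a minimal k-means with a hypothetical weighted combination of the two feature groups (feature values, the weight `alpha`, and the deterministic centroid seeding are all illustrative assumptions).

```python
import math

def combined_distance(a, b, n_spectral, alpha=0.5):
    """Weighted distance mixing spectral and textural feature groups.
    A simplified stand-in for the CoSA clustering metric; alpha is a
    hypothetical mixing weight."""
    ds = math.dist(a[:n_spectral], b[:n_spectral])   # spectral part
    dt = math.dist(a[n_spectral:], b[n_spectral:])   # textural part
    return alpha * ds + (1.0 - alpha) * dt

def kmeans(points, k, n_spectral, iters=10):
    """Plain k-means under the combined metric; centroids are seeded
    at evenly spaced input points so the result is deterministic."""
    centroids = [list(points[i * len(points) // k]) for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda c: combined_distance(p, centroids[c], n_spectral))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# toy "pixels": [index 1, index 2 | texture energy, texture contrast]
water = [[0.10, 0.20, 0.00, 0.10], [0.12, 0.18, 0.05, 0.12], [0.09, 0.22, 0.02, 0.08]]
veg   = [[0.80, 0.70, 0.60, 0.50], [0.82, 0.68, 0.55, 0.52], [0.79, 0.72, 0.62, 0.48]]
labels = kmeans(water + veg, k=2, n_spectral=2)
```

    With clearly separated groups, the two feature classes end up in distinct clusters regardless of the exact value of `alpha`.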

  20. Change detection and classification of land cover in multispectral satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries

    DOE PAGES

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...

    2014-10-01

    Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.

  1. A machine vision system for micro-EDM based on linux

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Wansheng; Li, Gang; Li, Zhiyong; Zhang, Yong

    2006-11-01

    Due to the high precision and good surface quality it can deliver, Electrical Discharge Machining (EDM) is potentially an important process for the fabrication of micro-tools and micro-components. However, a number of issues remain unsolved before micro-EDM becomes a reliable process with repeatable results. To deal with the difficulties of on-line fabrication of micro electrodes and tool wear compensation, a machine vision system for micro-EDM is developed around a Charge Coupled Device (CCD) camera with an optical resolution of 1.61 μm and an overall magnification of 113~729. Based on the Linux operating system, an image capturing program is developed with the V4L2 API, and an image processing program is built on OpenCV. The contour of micro electrodes is extracted by means of the Canny edge detector. Through system calibration, the micro electrode diameter can be measured on-line. Experiments have been carried out to verify the system's performance, and the sources of measurement error are analyzed.
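    The record describes extracting the electrode contour and converting pixel spans to micrometres via the calibrated optical resolution. A minimal, dependency-free sketch of that measurement step is below (the real system uses OpenCV's Canny detector; here a plain intensity threshold stands in for edge extraction, and only the 1.61 μm/pixel figure is taken from the record; the synthetic frame is invented).

```python
UM_PER_PIXEL = 1.61  # optical resolution quoted for the CCD set-up

def electrode_diameter_um(image, threshold=128):
    """Estimate electrode diameter from a grayscale image in which the
    electrode appears dark on a bright background: per row, measure the
    span of below-threshold pixels, then average the spans and apply
    the pixel calibration (a crude stand-in for Canny contours)."""
    widths = []
    for row in image:
        cols = [i for i, v in enumerate(row) if v < threshold]
        if cols:
            widths.append(cols[-1] - cols[0] + 1)
    return UM_PER_PIXEL * sum(widths) / len(widths) if widths else 0.0

# synthetic frame: a 4-pixel-wide dark electrode in columns 3..6
frame = [[255] * 3 + [0] * 4 + [255] * 3 for _ in range(5)]
d = electrode_diameter_um(frame)  # 4 px * 1.61 um/px
```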

  2. A light-stimulated synaptic device based on graphene hybrid phototransistor

    NASA Astrophysics Data System (ADS)

    Qin, Shuchao; Wang, Fengqiu; Liu, Yujie; Wan, Qing; Wang, Xinran; Xu, Yongbing; Shi, Yi; Wang, Xiaomu; Zhang, Rong

    2017-09-01

    Neuromorphic chips refer to an unconventional computing architecture that is modelled on biological brains. They are increasingly employed for processing sensory data for machine vision, context cognition, and decision making. Despite rapid advances, neuromorphic computing has remained largely an electronic technology, making it a challenge to access the superior computing features provided by photons, or to directly process vision data that has increasing importance to artificial intelligence. Here we report a novel light-stimulated synaptic device based on a graphene-carbon nanotube hybrid phototransistor. Significantly, the device can respond to optical stimuli in a highly neuron-like fashion and exhibits flexible tuning of both short- and long-term plasticity. These features combined with the spatiotemporal processability make our device a capable counterpart to today’s electrically-driven artificial synapses, with superior reconfigurable capabilities. In addition, our device allows for generic optical spike processing, which provides a foundation for more sophisticated computing. The silicon-compatible, multifunctional photosensitive synapse opens up a new opportunity for neural networks enabled by photonics and extends current neuromorphic systems in terms of system complexities and functionalities.

  3. High-Level Vision and Planning Workshop Proceedings

    DTIC Science & Technology

    1989-08-01

    Correspondence in Line Drawings of Multiple View-. In Proc. of 8th Intern. Joint Conf. on Artificial Intelligence. 1983. [63] Tomiyasu, K. Tutorial...joint U.S.-Israeli workshop on artificial intelligence are provided in this Institute for Defense Analyses document. This document is based on a broad...participants is provided along with applicable references for individual papers. 14. SUBJECT TERMS 15. NUMBER OF PAGES Artificial Intelligence; Machine Vision

  4. Teaching Translation and Interpreting 2: Insights, Aims, Visions. [Selection of] Papers from the Second Language International Conference (Elsinore, Denmark, June 4-6, 1993).

    ERIC Educational Resources Information Center

    Dollerup, Cay, Ed.; Lindegaard, Annette, Ed.

    This selection of papers starts with insights into multi- and plurilingual settings, then proceeds to discussions of aims for practical work with students, and ends with visions of future developments within translation for the mass media and the impact of machine translation. Papers are: "Interpreting at the European Commission";…

  5. Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer

    2005-01-01

    Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.

  6. Some Examples Of Image Warping For Low Vision Prosthesis

    NASA Astrophysics Data System (ADS)

    Juday, Richard D.; Loshin, David S.

    1988-08-01

    NASA and Texas Instruments have developed an image processor, the Programmable Remapper, for certain functions in machine vision. The Remapper performs a highly arbitrary geometric warping of an image at video rate. It might ultimately be shrunk to a size and cost that could allow its use in a low-vision prosthesis. We have developed coordinate warpings for retinitis pigmentosa (tunnel vision) and for maculopathy (loss of central field) that are intended to make best use of the patient's remaining viable retina. The rationales and mathematics are presented for some warpings that we will try in clinical studies using the Remapper's prototype. (Recorded video imagery was shown at the conference for the maculopathy remapping.)
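    To illustrate the kind of coordinate warping involved, here is a toy radial remap for a central-field loss: content near the damaged centre is pushed outward onto viable retina, leaving the central disc blank. This is a hedged sketch of the general idea using nearest-neighbour inverse mapping, not the actual Remapper transform or its mathematics.

```python
import math

def radial_remap(image, hole_radius):
    """Nearest-neighbour coordinate warp for central-field loss:
    input content at radius r_in reappears at
    r_out = hole_radius + r_in * (R - hole_radius) / R,
    so the damaged central disc stays blank (0). Inverse mapping is
    used so every output pixel is defined. Corners beyond radius R
    are left blank as well."""
    h, w = len(image), len(image[0])
    cy, cx = h // 2, w // 2
    R = min(cy, cx)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r_out = math.hypot(y - cy, x - cx)
            if r_out < hole_radius or r_out > R:
                continue
            r_in = (r_out - hole_radius) * R / (R - hole_radius)
            yi = round(cy + (y - cy) * r_in / r_out)
            xi = round(cx + (x - cx) * r_in / r_out)
            if 0 <= yi < h and 0 <= xi < w:
                out[y][x] = image[yi][xi]
    return out

img = [[1] * 11 for _ in range(11)]
img[5][5] = 9  # a feature that falls on the (damaged) centre of view
warped = radial_remap(img, hole_radius=2)
```

    After warping, the central feature is visible at the inner edge of the vacated disc instead of on the dead central retina.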

  7. BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Machine learning and data mining advance predictive big data analysis in precision animal agriculture.

    PubMed

    Morota, Gota; Ventura, Ricardo V; Silva, Fabyano F; Koyama, Masanori; Fernando, Samodha C

    2018-04-14

    Precision animal agriculture is poised to rise to prominence in the livestock enterprise in the domains of management, production, welfare, sustainability, health surveillance, and environmental footprint. Considerable progress has been made in the use of tools to routinely monitor and collect information from animals and farms in a less laborious manner than before. These efforts have enabled the animal sciences to embark on information technology-driven discoveries to improve animal agriculture. However, the growing amount and complexity of data generated by fully automated, high-throughput data recording or phenotyping platforms, including digital images, sensor and sound data, unmanned systems, and information obtained from real-time noninvasive computer vision, pose challenges to the successful implementation of precision animal agriculture. The emerging fields of machine learning and data mining are expected to be instrumental in helping meet the daunting challenges facing global agriculture. Yet, their impact and potential in "big data" analysis have not been adequately appreciated in the animal science community, where this recognition has remained only fragmentary. To address such knowledge gaps, this article outlines a framework for machine learning and data mining and offers a glimpse into how they can be applied to solve pressing problems in animal sciences.

  8. Methodology for creating dedicated machine and algorithm on sunflower counting

    NASA Astrophysics Data System (ADS)

    Muracciole, Vincent; Plainchault, Patrick; Mannino, Maria-Rosaria; Bertrand, Dominique; Vigouroux, Bertrand

    2007-09-01

    In order to sell grain lots in European countries, seed industries need government certification. This certification requires purity testing, seed counting to quantify specified seed species and other impurities in lots, and germination testing. These analyses are carried out within the framework of international trade according to the methods of the International Seed Testing Association. Presently, these analyses are still performed manually by skilled operators. Previous work has shown that seeds can be characterized by around 110 visual features (morphology, colour, texture), and several identification algorithms have been presented. Until now, most work in this domain has been computer based. The approach presented in this article is based on the design of a dedicated electronic vision machine aimed at identifying and sorting seeds. This machine is composed of an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor) and a PC hosting the GUI (graphical user interface) of the system. Its operation relies on stroboscopic image acquisition of a seed falling in front of a camera. A first machine was designed according to this approach in order to simulate the whole vision chain (image acquisition, feature extraction, identification) under the Matlab environment. In order to port this task to dedicated hardware, all the algorithms were developed without use of the Matlab toolboxes. The objective of this article is to present a design methodology for a special-purpose identification algorithm, based on distances between groups, implemented in a dedicated hardware machine for seed counting.
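    A "distance between groups" identification rule can be sketched as a nearest-centroid classifier over visual features. The feature names, values, and species below are hypothetical illustrations, not the authors' 110-feature set.

```python
import math

def centroid(samples):
    """Per-feature mean of a group of feature vectors."""
    return [sum(col) / len(samples) for col in zip(*samples)]

def classify(features, class_centroids):
    """Assign a seed to the species whose feature centroid is nearest
    in Euclidean distance (a minimal distance-between-groups rule)."""
    return min(class_centroids,
               key=lambda name: math.dist(features, class_centroids[name]))

# hypothetical training features per class: [area, elongation, mean hue]
train = {
    "sunflower": [[8.0, 1.8, 0.10], [8.4, 1.7, 0.12], [7.9, 1.9, 0.11]],
    "impurity":  [[2.0, 1.1, 0.40], [2.3, 1.0, 0.42], [1.8, 1.2, 0.38]],
}
centroids = {name: centroid(s) for name, s in train.items()}
label = classify([8.1, 1.8, 0.11], centroids)
```

    The same rule maps naturally onto FPGA/DSP hardware, since each classification is a fixed number of multiply-accumulate operations per class.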

  9. Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology

    NASA Astrophysics Data System (ADS)

    Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya

    2017-09-01

    Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. Many review articles introduce thermal error research on CNC machine tools, but they mainly focus on thermal issues in small and medium-sized machines and seldom cover thermal error monitoring technologies. This paper gives an overview of research on the thermal error of CNC machine tools and emphasizes the study of thermal error in heavy-duty CNC machine tools in three areas: the causes of thermal error, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology for heavy-duty CNC machine tools, fiber Bragg grating (FBG) distributed sensing, is introduced in detail; it forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills a gap in the review literature, guides the development of this industrial field, and opens up new areas of research on heavy-duty CNC machine tool thermal error.
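    Once temperature and deformation are monitored, a common compensation step (not specific to this paper) is to fit thermal drift against temperature rise and subtract the predicted drift. A minimal least-squares sketch follows; all numbers are hypothetical calibration data, not measurements from the record.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b, closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# hypothetical calibration run: spindle temperature rise (deg C)
# versus measured axial drift (um), roughly 0.8 um per deg C
temp_rise = [0.0, 5.0, 10.0, 15.0, 20.0]
drift_um  = [0.0, 4.0, 8.0, 12.0, 16.0]
a, b = fit_line(temp_rise, drift_um)
compensation_um = -(a * 12.5 + b)  # offset to apply at a 12.5 deg C rise
```

    Real heavy-duty machines need multi-input models (many FBG sensing points, nonlinear terms), but the compensate-by-prediction structure is the same.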

  10. Vision servo of industrial robot: A review

    NASA Astrophysics Data System (ADS)

    Zhang, Yujin

    2018-04-01

    Robot technology has been applied to many areas of production and everyday life. With the continuous development of robot applications, the requirements placed on robots keep rising. To give robots better perception, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification and application of robot vision servo technology are discussed, and the development prospects of industrial robot vision servo technology are outlined.
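    The classic image-based visual servo law drives feature error to zero with a camera velocity v = -λ L⁻¹ e, where L is the interaction (image Jacobian) matrix. The sketch below simulates that loop in two dimensions with a hypothetical, invertible 2x2 interaction matrix; with an exact model, the error contracts by a factor of (1 - λ·dt) per step.

```python
def invert2x2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def servo(error, L, lam=0.5, dt=1.0, steps=5):
    """Iterate the IBVS law v = -lam * L^{-1} e and simulate the
    feature kinematics e_dot = L v; each step scales e by (1 - lam*dt)."""
    L_inv = invert2x2(L)
    for _ in range(steps):
        v = [-lam * x for x in matvec(L_inv, error)]
        error = [e + dt * le for e, le in zip(error, matvec(L, v))]
    return error

# hypothetical interaction matrix relating camera velocity to feature motion
L = [[2.0, 0.5], [0.0, 3.0]]
e = servo([10.0, -6.0], L)  # error after 5 steps: scaled by 0.5**5
```

    In practice L depends on feature depth and is only estimated, so robustness to model error is a central topic of the visual servoing literature this record surveys.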

  11. Landforms of the conterminous United States: a digital shaded-relief portrayal

    USGS Publications Warehouse

    Thelin, Gail P.; Pike, Richard J.

    1991-01-01

    Our map was made by digital image processing, a technical specialty related to the broader fields of computer graphics and machine vision (Dawson, 1987; Kennie and McLaren, 1988). The technology includes the many spatially based operations first brought together and developed systematically to manipulate Ranger, Mariner, Landsat, and other images that are reassembled from spacecraft telemetry in a raster or scan-line arrangement of square-grid elements (Nathan, 1966; Castleman, 1979; Sheldon, 1987). These computer procedures have been successfully transferred from remote-sensing applications to landform analysis by substituting terrain heights or sea-floor depths for the customary values of electromagnetic radiation obtained from satellites and stored in digital arrays of pixels (Batson and others, 1975).
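    The core of a shaded-relief portrayal is computing, for each grid cell of the elevation model, the brightness of a surface lit from a chosen azimuth and altitude. The sketch below uses central-difference slopes and Lambertian shading in the style of the standard hillshade algorithm; it is a generic illustration, not the USGS production pipeline, and the tiny DEM is invented.

```python
import math

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded relief for interior cells: central-difference slope and
    aspect, then Lambertian shading for a light source at the given
    azimuth/altitude, scaled to 0..255 (negative shading clamped)."""
    az = math.radians(360.0 - azimuth_deg + 90.0)  # to math convention
    alt = math.radians(altitude_deg)
    h, w = len(dem), len(dem[0])
    shade = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dzdx = (dem[y][x + 1] - dem[y][x - 1]) / (2 * cellsize)
            dzdy = (dem[y + 1][x] - dem[y - 1][x]) / (2 * cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            v = (math.sin(alt) * math.cos(slope)
                 + math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            shade[y][x] = max(0, round(255 * v))
    return shade

# a north-south ridge: with light from the NW (azimuth 315), the
# west-facing flank should render brighter than the east-facing flank
dem = [[-2.0 * abs(x - 3) for x in range(7)] for _ in range(7)]
sh = hillshade(dem)
```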

  12. Research on key technology of the verification system of steel rule based on vision measurement

    NASA Astrophysics Data System (ADS)

    Jia, Siyuan; Wang, Zhong; Liu, Changjie; Fu, Luhua; Li, Yiming; Lu, Ruijun

    2018-01-01

    The steel rule plays an important role in quantity transmission. However, the traditional verification method for steel rules, based on manual operation and reading, suffers from low precision and low efficiency. A machine vision based verification system for steel rules is designed with reference to JJG 1-1999, the Verification Regulation of Steel Rule [1]. What differentiates this system is that it uses a new calibration method for the pixel equivalent and decontaminates the surface of the steel rule. Experiments show that these two methods fully meet the requirements of the verification system. Measurement results show that these methods not only meet the precision required by the verification regulation, but also improve the reliability and efficiency of the verification system.
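    The pixel-equivalent idea is simple arithmetic: image a reference interval of known length, divide by its span in pixels, then use that factor to convert measured graduation spacings to millimetres. The sketch below illustrates it with hypothetical numbers (not the paper's calibration method or data).

```python
def pixel_equivalent(px_a, px_b, known_mm):
    """mm represented by one pixel, from two graduation-line centres
    whose true spacing is known."""
    return known_mm / abs(px_b - px_a)

def interval_mm(px_a, px_b, mm_per_px):
    """Physical length of an interval measured in pixel coordinates."""
    return abs(px_b - px_a) * mm_per_px

# hypothetical numbers: a 10 mm reference interval spans 2000 pixel columns
k = pixel_equivalent(150.0, 2150.0, 10.0)   # 0.005 mm/pixel
measured = interval_mm(300.0, 2302.4, k)    # a nominal 10 mm interval under test
deviation_um = (measured - 10.0) * 1000.0   # error of the graduation, in um
```

    Sub-pixel localisation of the graduation-line centres is what makes such a system more precise than manual reading.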

  13. Analysis of live cell images: Methods, tools and opportunities.

    PubMed

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The opportunities offered by recent advances in machine learning, especially deep learning, and computer vision are discussed. The review also gives an overview of the different available software packages and toolkits. Copyright © 2017. Published by Elsevier Inc.
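    After segmentation, a basic quantification step is counting connected foreground blobs as candidate cells. The dependency-free sketch below uses a fixed intensity threshold and 4-connected component labelling via breadth-first search; real pipelines reviewed by the paper use far more robust segmentation (Otsu, watershed, deep networks), and the toy image is invented.

```python
from collections import deque

def count_components(binary):
    """Count 4-connected foreground blobs in a binary mask, the
    counting step that follows segmentation in a simple pipeline."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                n += 1
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return n

image = [
    [0, 200, 210, 0, 0, 0],
    [0, 190, 0, 0, 180, 185],
    [0, 0, 0, 0, 175, 0],
]
mask = [[1 if v > 128 else 0 for v in row] for row in image]
cells = count_components(mask)  # two bright blobs
```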

  14. Compact Microscope Imaging System With Intelligent Controls Improved

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The Compact Microscope Imaging System (CMIS) with intelligent controls is a diagnostic microscope analysis tool with intelligent controls for use in space, industrial, medical, and security applications. This compact miniature microscope, which can perform tasks usually reserved for conventional microscopes, has unique advantages in the fields of microscopy, biomedical research, inline process inspection, and space science. Its unique approach integrates a machine vision technique with an instrumentation and control technique that provides intelligence via the use of adaptive neural networks. The CMIS system was developed at the NASA Glenn Research Center specifically for interface detection used for colloid hard spheres experiments; biological cell detection for patch clamping, cell movement, and tracking; and detection of anode and cathode defects for laboratory samples using microscope technology.

  15. Gearing up to the factory of the future

    NASA Astrophysics Data System (ADS)

    Godfrey, D. E.

    1985-01-01

    The features of factories and manufacturing techniques and tools of the near future are discussed. The spur to incorporate new technologies on the factory floor will originate in management, who must guide the interfacing of computer-enhanced equipment with traditional manpower, materials and machines. Electronic control with responsiveness and flexibility will be the key concept in an integrated approach to processing materials. Microprocessor controlled laser and fluid cutters add accuracy to cutting operations. Unattended operation will become feasible when automated inspection is added to a work station through developments in robot vision. Optimum shop management will be achieved through AI programming of parts manufacturing, optimized work flows, and cost accounting. The automation enhancements will allow designers to affect directly parts being produced on the factory floor.

  16. MAGNET ENGINEERING AND TEST RESULTS OF THE HIGH FIELD MAGNET R AND D PROGRAM AT BNL.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    COZZOLINO,J.; ANERELLA,M.; ESCALLIER,J.

    2002-08-04

    The Superconducting Magnet Division at Brookhaven National Laboratory (BNL) has been carrying out design, engineering, and technology development of high performance magnets for future accelerators. High Temperature Superconductors (HTS) play a major role in the BNL vision of a few high performance interaction region (IR) magnets that would be placed in a machine about ten years from now. This paper presents the engineering design of a ''react and wind'' Nb{sub 3}Sn magnet that will provide a 12 Tesla background field on HTS coils. In addition, the coil production tooling as well as the most recent 10-turn R&D coil test results will be discussed.

  17. Learning surface molecular structures via machine vision

    DOE PAGES

    Ziatdinov, Maxim; Maksov, Artem; Kalinin, Sergei V.

    2017-08-10

    Recent advances in high resolution scanning transmission electron and scanning probe microscopies have allowed researchers to perform measurements of materials structural parameters and functional properties in real space with a picometre precision. In many technologically relevant atomic and/or molecular systems, however, the information of interest is distributed spatially in a non-uniform manner and may have a complex multi-dimensional nature. One of the critical issues, therefore, lies in being able to accurately identify ('read out') all the individual building blocks in different atomic/molecular architectures, as well as more complex patterns that these blocks may form, on a scale of hundreds and thousands of individual atomic/molecular units. Here we employ machine vision to read and recognize complex molecular assemblies on surfaces. Specifically, we combine a Markov random field model and convolutional neural networks to classify structural and rotational states of all individual building blocks in a molecular assembly on a metallic surface visualized in high-resolution scanning tunneling microscopy measurements. We show how the obtained full decoding of the system allows us to directly construct a pair density function, a centerpiece of the disorder-property relationship paradigm, as well as to analyze spatial correlations between multiple order parameters at the nanoscale, and to elucidate a reaction pathway involving molecular conformation changes. The method represents a significant shift in the way atomic and/or molecular resolved microscopic images are analyzed and can be applied to a variety of other microscopic measurements of structural, electronic, and magnetic order in different condensed matter systems.
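    The paper's Markov random field layer imposes spatial consistency on per-molecule network predictions. As a crude, hedged stand-in for that regularisation, the sketch below relabels each site on a grid by a majority vote over its 4-neighbourhood and itself (an ICM-flavoured smoothing sweep); the grid of rotational-state labels is invented and the real model is considerably more sophisticated.

```python
from collections import Counter

def majority_smooth(labels):
    """One sweep of neighbour-majority relabelling on a 2D label grid,
    a simplistic stand-in for MRF-based spatial regularisation of
    per-unit classifier outputs."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for y in range(h):
        for x in range(w):
            votes = Counter()
            for dy, dx in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    votes[labels[ny][nx]] += 1
            out[y][x] = votes.most_common(1)[0][0]
    return out

# a uniform domain of rotational state "A" with one mislabelled unit
noisy = [["A", "A", "A"], ["A", "B", "A"], ["A", "A", "A"]]
clean = majority_smooth(noisy)
```

    The isolated "B" is outvoted by its neighbours, which is the intuition behind letting lattice context correct noisy per-unit classifications.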

  18. Learning surface molecular structures via machine vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziatdinov, Maxim; Maksov, Artem; Kalinin, Sergei V.

    Recent advances in high resolution scanning transmission electron and scanning probe microscopies have allowed researchers to perform measurements of materials structural parameters and functional properties in real space with a picometre precision. In many technologically relevant atomic and/or molecular systems, however, the information of interest is distributed spatially in a non-uniform manner and may have a complex multi-dimensional nature. One of the critical issues, therefore, lies in being able to accurately identify ('read out') all the individual building blocks in different atomic/molecular architectures, as well as more complex patterns that these blocks may form, on a scale of hundreds and thousands of individual atomic/molecular units. Here we employ machine vision to read and recognize complex molecular assemblies on surfaces. Specifically, we combine a Markov random field model and convolutional neural networks to classify structural and rotational states of all individual building blocks in a molecular assembly on a metallic surface visualized in high-resolution scanning tunneling microscopy measurements. We show how the obtained full decoding of the system allows us to directly construct a pair density function, a centerpiece of the disorder-property relationship paradigm, as well as to analyze spatial correlations between multiple order parameters at the nanoscale, and to elucidate a reaction pathway involving molecular conformation changes. The method represents a significant shift in the way atomic and/or molecular resolved microscopic images are analyzed and can be applied to a variety of other microscopic measurements of structural, electronic, and magnetic order in different condensed matter systems.

  19. USAF Summer Faculty Research Program. 1981 Research Reports. Volume I.

    DTIC Science & Technology

    1981-10-01

    Kent, OH 44242 (216) 672-2816 Dr. Martin D. Altschuler Degree: PhD, Physics and Astronomy, 1964 Associate Professor Specialty: Robot Vision, Surface...line inspection and control, computer-aided manufacturing, robot vision, mapping of machine parts and castings, etc. The technique we developed...posture, reduced healing time and bacteria level, and improved capacity for work endurance and efficiency. Federal agencies, such as the FDA and

  20. MS Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koppenaal, David W.; Barinaga, Charles J.; Denton, M Bonner B.

    2005-11-01

    Good eyesight is often taken for granted, a situation that everyone appreciates once vision begins to fade with age. New eyeglasses or contact lenses are traditional ways to improve vision, but recent new technology, i.e. LASIK laser eye surgery, provides a new and exciting means for marked vision restoration and improvement. In mass spectrometry, detectors are the 'eyes' of the MS instrument. These 'eyes' have also been taken for granted. New detectors and new technologies are likewise needed to correct, improve, and extend ion detection and hence, our 'chemical vision'. The purpose of this report is to review and assess current MS detector technology and to provide a glimpse towards future detector technologies. It is hoped that the report will also serve to motivate interest, prompt ideas, and inspire new visions for ion detection research.

  1. Insect vision as model for machine vision

    NASA Astrophysics Data System (ADS)

    Osorio, D.; Sobey, Peter J.

    1992-11-01

    The neural architecture, neurophysiology and behavioral abilities of insect vision are described, and compared with those of mammals. Insects have a hardwired neural architecture of highly differentiated neurons, quite different from the cerebral cortex, yet their behavioral abilities are in important respects similar to those of mammals. These observations challenge the view that the key to the power of biological neural computation is distributed processing by a plastic, highly interconnected network of individually undifferentiated and unreliable neurons, a view that has been the dominant picture of biological computation since McCulloch and Pitts's seminal work in the 1940s.

  2. University NanoSat Program: AggieSat3

    DTIC Science & Technology

    2009-06-01

    commercially available product for stereo machine vision developed by Point Grey Research. The current binocular BumbleBee2® system incorporates two...and Fellow of the American Society of Mechanical Engineers (ASME) in 1997. She was awarded the 2007 J. Leland "Lee" Atwood Award from the ASEE...AggieSat2 satellite programs. Additional experience gained in the area of drawing standards, machining capabilities, solid modeling, safety

  3. Identification Of Cells With A Compact Microscope Imaging System With Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor)

    2006-01-01

    A Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
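    Autofocus of the kind CMIS performs is commonly implemented by sweeping the focal axis and keeping the frame that maximises a sharpness metric. The record does not specify CMIS's metric; the sketch below uses gradient energy (sum of squared neighbour differences) as one common, hypothetical choice, with invented toy frames.

```python
def sharpness(image):
    """Gradient energy: sum of squared horizontal and vertical pixel
    differences. Sharp, high-contrast frames score higher than
    defocused ones."""
    h, w = len(image), len(image[0])
    s = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                s += (image[y][x + 1] - image[y][x]) ** 2
            if y + 1 < h:
                s += (image[y + 1][x] - image[y][x]) ** 2
    return s

def best_focus(stack):
    """Index of the sharpest frame in a focal stack."""
    return max(range(len(stack)), key=lambda i: sharpness(stack[i]))

blurry = [[10, 11, 12], [11, 12, 13], [12, 13, 14]]
sharp  = [[0, 50, 0], [50, 0, 50], [0, 50, 0]]
idx = best_focus([blurry, sharp, blurry])  # middle frame wins
```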

  4. Man-machine interactive imaging and data processing using high-speed digital mass storage

    NASA Technical Reports Server (NTRS)

    Alsberg, H.; Nathan, R.

    1975-01-01

    The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a television system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. A general-purpose digital computer with an extensive special-purpose software system is used to perform an almost unlimited repertoire of processing operations.

  5. Tracking of Cells with a Compact Microscope Imaging System with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor)

    2007-01-01

    A Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.

  6. Tracking of cells with a compact microscope imaging system with intelligent controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor)

    2007-01-01

    A Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to auto-focus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.

  7. Operation of a Cartesian Robotic System in a Compact Microscope with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor)

    2006-01-01

    A Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.

  8. Machine vision extracted plant movement for early detection of plant water stress.

    PubMed

    Kacira, M; Ling, P P; Short, T H

    2002-01-01

    A methodology was established for early, non-contact, and quantitative detection of plant water stress with machine vision extracted plant features. Top-projected canopy area (TPCA) of the plants was extracted from plant images using image-processing techniques. Water stress induced plant movement was decoupled from plant diurnal movement and plant growth using the coefficient of relative variation of TPCA (CRV(TPCA)), which was found to be an effective marker for water stress detection. The threshold value of CRV(TPCA) as an indicator of water stress was determined by a parametric approach. The effectiveness of the sensing technique was evaluated against the timing of stress detection by an operator. The results of this study suggested that plant water stress detection using projected-canopy-area-based features of the plants was feasible.
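    The coefficient of relative variation used above is simply the sample standard deviation normalized by the mean. A minimal sketch of applying it to a window of TPCA measurements (the window contents and the threshold value here are illustrative assumptions, not the paper's calibrated values):

```python
from statistics import mean, stdev

def crv(values):
    """Coefficient of relative variation: sample std / mean, as a percentage."""
    return 100.0 * stdev(values) / mean(values)

def water_stressed(tpca_series, threshold=5.0):
    """Flag water stress when the CRV of top-projected canopy area (TPCA)
    exceeds a threshold (the threshold here is an illustrative assumption)."""
    return crv(tpca_series) > threshold

# A stable canopy area vs. one fluctuating from stress-induced movement
print(water_stressed([100.0, 101.0, 99.5, 100.5]))  # -> False
print(water_stressed([100.0, 88.0, 104.0, 82.0]))   # -> True
```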

  9. Local intensity adaptive image coding

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1989-01-01

    The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated which combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed, its performance is characterized in a simulated space application, and the research and development activities are described.

  10. High Throughput Multispectral Image Processing with Applications in Food Science.

    PubMed

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision has been gaining attention in food science as well as in the food industry concerning food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only to estimate and even predict food quality but also to detect adulteration. Towards these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity and data reproducibility, extract information at low cost, and speed up quality assessment without human intervention. The outcome of image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we demonstrate its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
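    The Gaussian-mixture segmentation idea can be illustrated with a toy two-component EM fit on single-band pixel intensities. This is a minimal sketch under simplifying assumptions: the paper's method is multispectral and adds a band-selection scheme, neither of which is shown here, and the synthetic pixel values stand in for real image data.

```python
import math, random

def fit_gmm2(pixels, iters=50):
    """EM for a two-component 1-D Gaussian mixture over pixel intensities."""
    mu = [min(pixels), max(pixels)]  # deterministic init at the extremes
    var = [0.01, 0.01]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        resp = []
        for x in pixels:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: update weights, means, and variances from responsibilities
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(pixels)
            mu[k] = sum(r[k] * x for r, x in zip(resp, pixels)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, pixels)) / nk, 1e-6)
    return mu, var, w

def segment(pixels, mu, var, w):
    """Label each pixel with its most responsible mixture component."""
    labels = []
    for x in pixels:
        p = [w[k] / math.sqrt(2 * math.pi * var[k])
             * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
        labels.append(0 if p[0] >= p[1] else 1)
    return labels

# Synthetic "background" and "object" intensity populations
random.seed(0)
pixels = ([random.gauss(0.2, 0.03) for _ in range(200)]
          + [random.gauss(0.8, 0.03) for _ in range(200)])
mu, var, w = fit_gmm2(pixels)
labels = segment(pixels, mu, var, w)
```

    In practice a library implementation (e.g. a mixture-model fit over all spectral bands at once) would replace this hand-rolled EM; the sketch only shows why the unsupervised fit separates the image into segments without labeled training data.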

  11. Sensory substitution: closing the gap between basic research and widespread practical visual rehabilitation.

    PubMed

    Maidenbaum, Shachar; Abboud, Sami; Amedi, Amir

    2014-04-01

    Sensory substitution devices (SSDs) have come a long way since first developed for visual rehabilitation. They have produced exciting experimental results, and have furthered our understanding of the human brain. Unfortunately, they are still not used for practical visual rehabilitation, and are currently considered reserved primarily for experiments in controlled settings. Over the past decade, our understanding of the neural mechanisms behind visual restoration has changed as a result of converging evidence, much of which was gathered with SSDs. This evidence suggests that the brain is more than a pure sensory machine; rather, it is a highly flexible task machine, i.e., brain regions can maintain or regain their function in vision even with input from other senses. This complements a recent set of more promising behavioral achievements using SSDs and new promising technologies and tools. All these changes strongly suggest that the time has come to revive the focus on practical visual rehabilitation with SSDs, and we chart several key steps in this direction such as training protocols and self-training tools. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Present Vision--Future Vision.

    ERIC Educational Resources Information Center

    Fitterman, L. Jeffrey

    This paper addresses issues of current and future technology use for and by individuals with visual impairments and blindness in Florida. Present technology applications used in vision programs in Florida are individually described, including video enlarging, speech output, large inkprint, braille print, paperless braille, and tactual output…

  13. Application of machine vision to pup loaf bread evaluation

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Chung, O. K.

    1996-12-01

    Intrinsic end-use quality of hard winter wheat breeding lines is routinely evaluated at the USDA, ARS, USGMRL, Hard Winter Wheat Quality Laboratory. The experimental baking test of pup loaves is the ultimate test for evaluating hard wheat quality. Computer vision was applied to develop an objective methodology for bread quality evaluation of the 1994 and 1995 crop wheat breeding line samples. Computer-extracted features of bread crumb grain were studied, using 32 x 32 pixel subimages and features computed for the slices with different threshold settings. A subsampling grid was located with respect to the axis of symmetry of a slice to provide identical topological subimage information. Different ranking techniques were applied to the databases. Statistical analysis was run on the database of digital image and breadmaking features. Several ranking algorithms and data visualization techniques were employed to create a sensitive scale for porosity patterns of bread crumb. There were significant linear correlations between machine-vision-extracted features and breadmaking parameters. Crumb grain scores by human experts correlated more highly with some image features than with breadmaking parameters.

  14. Prototyping machine vision software on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Karantalis, George; Batchelor, Bruce G.

    1998-10-01

    Interactive image processing is a proven technique for analyzing industrial vision applications and building prototype systems. Several of the previous implementations have used dedicated hardware to perform the image processing, with a top layer of software providing a convenient user interface. More recently, self-contained software packages have been devised and these run on a standard computer. The advent of the Java programming language has made it possible to write platform-independent software, operating over the Internet or a company-wide Intranet. Thus, there arises the possibility of designing at least some shop-floor inspection/control systems without the vision engineer ever entering the factories where they will be used. If successful, this project will have a major impact on the productivity of vision system designers.

  15. Night vision and electro-optics technology transfer, 1972 - 1981

    NASA Astrophysics Data System (ADS)

    Fulton, R. W.; Mason, G. F.

    1981-09-01

    The purpose of this special report, 'Night Vision and Electro-Optics Technology Transfer 1972-1981,' is threefold: To illustrate, through actual case histories, the potential for exploiting a highly developed and available military technology for solving non-military problems. To provide, in a layman's language, the principles behind night vision and electro-optical devices in order that an awareness may be developed relative to the potential for adopting this technology for non-military applications. To obtain maximum dollar return from research and development investments by applying this technology to secondary applications. This includes, but is not limited to, applications by other Government agencies, state and local governments, colleges and universities, and medical organizations. It is desired that this summary of Technology Transfer activities within Night Vision and Electro-Optics Laboratory (NV/EOL) will benefit those who desire to explore one of the vast technological resources available within the Defense Department and the Federal Government.

  16. GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa

    2004-01-01

    The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.

  17. Visions of Change: Information Technology, Education and Postmodernism.

    ERIC Educational Resources Information Center

    Conlon, Tom

    2000-01-01

    Encourages visionary questions relating to information technology and education. Describes the context of postmodernist change and discusses two contrasting visions of how education could change, paternalism and libertarianism. Concludes that teachers, learners, and communities need to articulate their own visions of education to ensure a…

  18. An automated machine vision system for the histological grading of cervical intraepithelial neoplasia (CIN).

    PubMed

    Keenan, S J; Diamond, J; McCluggage, W G; Bharucha, H; Thompson, D; Bartels, P H; Hamilton, P W

    2000-11-01

    The histological grading of cervical intraepithelial neoplasia (CIN) remains subjective, resulting in inter- and intra-observer variation and poor reproducibility in the grading of cervical lesions. This study has attempted to develop an objective grading system using automated machine vision. The architectural features of cervical squamous epithelium are quantitatively analysed using a combination of computerized digital image processing and Delaunay triangulation analysis; 230 images digitally captured from cases previously classified by a gynaecological pathologist included normal cervical squamous epithelium (n=30), koilocytosis (n=46), CIN 1 (n=52), CIN 2 (n=56), and CIN 3 (n=46). Intra- and inter-observer variation had kappa values of 0.502 and 0.415, respectively. A machine vision system was developed in KS400 macro programming language to segment and mark the centres of all nuclei within the epithelium. By object-oriented analysis of image components, the positional information of nuclei was used to construct a Delaunay triangulation mesh. Each mesh was analysed to compute triangle dimensions including the mean triangle area, the mean triangle edge length, and the number of triangles per unit area, giving an individual quantitative profile of measurements for each case. Discriminant analysis of the geometric data revealed the significant discriminatory variables from which a classification score was derived. The scoring system distinguished between normal and CIN 3 in 98.7% of cases and between koilocytosis and CIN 1 in 76.5% of cases, but only 62.3% of the CIN cases were classified into the correct group, with the CIN 2 group showing the highest rate of misclassification. Graphical plots of triangulation data demonstrated the continuum of morphological change from normal squamous epithelium to the highest grade of CIN, with overlapping of the groups originally defined by the pathologists. 
This study shows that automated location of nuclei in cervical biopsies using computerized image analysis is possible. Analysis of positional information enables quantitative evaluation of architectural features in CIN using Delaunay triangulation meshes, which is effective in the objective classification of CIN. This demonstrates the future potential of automated machine vision systems in diagnostic histopathology. Copyright 2000 John Wiley & Sons, Ltd.
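    The geometric profile described above can be sketched as follows. The triangulation itself (triples of nuclei indices) is assumed to come from a Delaunay routine such as scipy.spatial.Delaunay; only the downstream feature computation is shown, on a hand-made stand-in mesh:

```python
import math

def triangle_features(points, triangles):
    """Mean triangle area, mean edge length, and triangles per unit area
    for a mesh given as 2-D point coordinates and index triples.
    Edges shared by two triangles are counted once per triangle."""
    areas, edges = [], []
    for i, j, k in triangles:
        (x1, y1), (x2, y2), (x3, y3) = points[i], points[j], points[k]
        # Shoelace formula for the area of one triangle
        areas.append(abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2)
        for a, b in ((i, j), (j, k), (k, i)):
            (ax, ay), (bx, by) = points[a], points[b]
            edges.append(math.hypot(bx - ax, by - ay))
    return {
        "mean_area": sum(areas) / len(areas),
        "mean_edge": sum(edges) / len(edges),
        "triangles_per_unit_area": len(areas) / sum(areas),
    }

# Unit square split into two triangles, standing in for a Delaunay mesh
# built over nuclei centres
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
tris = [(0, 1, 2), (0, 2, 3)]
feats = triangle_features(pts, tris)
```

    In the study, vectors like `feats` computed per case feed the discriminant analysis that yields the classification score.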

  19. [Spectral navigation technology and its application in positioning the fruits of fruit trees].

    PubMed

    Yu, Xiao-Lei; Zhao, Zhi-Min

    2010-03-01

    An innovative spectral navigation technology is presented in this paper. The new method adopts the reflectance spectra of fruits, leaves, and branches as key navigation parameters and positions the fruits of fruit trees by relying on the diversity of their spectral characteristics. The research results show that the spectrum of fruit-tree leaves is distinctly smooth, the spectrum of branches shows a gradually increasing trend, and the spectrum of fruit fluctuates. In addition, the difference in reflectance between fruits and leaves peaks at a wavelength of 850 nm, so a threshold can be set at this wavelength in order to distinguish fruits from leaves. The method introduced here can not only quickly distinguish fruits, leaves, and branches, but also avoid the effects of the surroundings. Compared with traditional navigation systems based on machine vision, spectral navigation technology has some special and unique advantages in positioning the fruits of fruit trees.
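    The 850 nm decision rule reduces to a reflectance threshold. A minimal sketch, noting that both the threshold value and which class lies above it are illustrative assumptions (the abstract only states that the fruit/leaf reflectance difference peaks at 850 nm):

```python
def classify_at_850nm(reflectance_850, fruit_above=True, threshold=0.5):
    """Label a pixel 'fruit' or 'leaf' from its reflectance at 850 nm.
    Which class lies above the threshold depends on the crop; the
    threshold and orientation here are illustrative assumptions."""
    if fruit_above:
        return "fruit" if reflectance_850 > threshold else "leaf"
    return "leaf" if reflectance_850 > threshold else "fruit"

print(classify_at_850nm(0.72))  # -> fruit
print(classify_at_850nm(0.31))  # -> leaf
```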

  20. Strategic Computing Computer Vision: Taking Image Understanding To The Next Plateau

    NASA Astrophysics Data System (ADS)

    Simpson, R. L., Jr.

    1987-06-01

    The overall objective of the Strategic Computing (SC) Program of the Defense Advanced Research Projects Agency (DARPA) is to develop and demonstrate a new generation of machine intelligence technology which can form the basis for more capable military systems in the future and also maintain a position of world leadership for the US in computer technology. Begun in 1983, SC represents a focused research strategy for accelerating the evolution of new technology and its rapid prototyping in realistic military contexts. Among the very ambitious demonstration prototypes being developed within the SC Program are: 1) the Pilot's Associate, which will aid the pilot in route planning, aerial target prioritization, evasion of missile threats, and aircraft emergency safety procedures during flight; 2) two battle management projects, one for the Army, which is just getting started, called the AirLand Battle Management program (ALBM), which will use knowledge-based systems technology to assist in the generation and evaluation of tactical options and plans at the Corps level; 3) the other, more established program for the Navy, the Fleet Command Center Battle Management Program (FCCBMP) at Pearl Harbor. The FCCBMP is employing knowledge-based systems and natural language technology in an evolutionary testbed situated in an operational command center to demonstrate and evaluate intelligent decision aids which can assist in the evaluation of fleet readiness and explore alternatives during contingencies; and 4) the Autonomous Land Vehicle (ALV), which integrates in a major robotic testbed the technologies for dynamic image understanding and knowledge-based route planning with replanning during execution, hosted on new advanced parallel architectures.
The goal of the Strategic Computing computer vision technology base (SCVision) is to develop generic technology that will enable the construction of complete, robust, high performance image understanding systems to support a wide range of DoD applications. Possible applications include autonomous vehicle navigation, photointerpretation, smart weapons, and robotic manipulation. This paper provides an overview of the technical and program management plans being used in evolving this critical national technology.

  1. Fusion of Multiple Sensing Modalities for Machine Vision

    DTIC Science & Technology

    1994-05-31

    Modeling of Non-Homogeneous 3-D Objects for Thermal and Visual Image Synthesis," Pattern Recognition, in press. [11] Nair, Dinesh, and J. K. Aggarwal...20th AIPR Workshop: Computer Vision--Meeting the Challenges, McLean, Virginia, October 1991. Nair, Dinesh, and J. K. Aggarwal, "An Object Recognition...Computer Engineering, August 1992. Sunil Gupta, Ph.D. Student; Mohan Kumar, M.S. Student; Sandeep Kumar, M.S. Student; Xavier Lebegue, Ph.D., Computer

  2. The research of knitting needle status monitoring setup

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Liao, Xiao-qing; Zhu, Yong-kang; Yang, Wei; Zhang, Pei; Zhao, Yong-kai; Huang, Hui-jie

    2013-09-01

    In textile production, quality control and testing are key to ensuring the process and improving efficiency. Defective knitting needles are the main factor affecting the appearance quality of textiles. Defect detection based on machine vision and image processing technology is widely used, but this approach does not effectively identify the defects generated by damaged knitting needles and raise an alarm. We developed a knitting needle status monitoring setup using optical imaging, photoelectric detection, and weak-signal processing technology to achieve real-time monitoring of the weaving needles' positions. Matching the shape of the knitting needle, we designed a Glass Optical Fiber (GOF) light guide with a rectangular port for transmission of the signal light. To capture the signal of the knitting needles accurately, we adopted an optical 4F system, which has good imaging quality and a simple structure and forms a rectangular image on its focal plane. When a knitting needle passes through the position of the rectangular image, the light reflected from the needle surface returns to the GOF light guide along the same optical path. According to the intensity of the signals, the computer control unit distinguishes whether the knitting needle is broken or bent. The experimental results show that this system can accurately detect broken and bent needles on the knitting machine in operating condition.
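    The final intensity-based decision can be sketched as a threshold classifier on the peak reflected-light signal seen as each needle passes the sensing position. The thresholds and the mapping from signal level to fault class are illustrative assumptions, not the setup's calibrated values:

```python
def needle_status(peak_intensity, broken_max=0.2, normal_min=0.8):
    """Classify a knitting needle from the peak reflected-light intensity
    (normalized to [0, 1]) as it passes the sensing position.  Mapping is
    an illustrative assumption: no reflection -> broken, weak or shifted
    reflection -> bent, strong reflection -> normal."""
    if peak_intensity < broken_max:
        return "broken"   # needle absent: (almost) no light returned
    if peak_intensity < normal_min:
        return "bent"     # needle displaced: only partial reflection
    return "normal"

print(needle_status(0.05))  # -> broken
print(needle_status(0.55))  # -> bent
print(needle_status(0.93))  # -> normal
```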

  3. UAV Research at NASA Langley: Towards Safe, Reliable, and Autonomous Operations

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.

    2016-01-01

    Unmanned Aerial Vehicles (UAV) are fundamental components in several aspects of research at NASA Langley, such as flight dynamics, mission-driven airframe design, airspace integration demonstrations, atmospheric science projects, and more. In particular, NASA Langley Research Center (Langley) is using UAVs to develop and demonstrate innovative capabilities that meet the autonomy and robotics challenges that are anticipated in science, space exploration, and aeronautics. These capabilities will enable new NASA missions such as asteroid rendezvous and retrieval (ARRM), Mars exploration, in-situ resource utilization (ISRU), pollution measurements in historically inaccessible areas, and the integration of UAVs into our everyday lives, all missions of increasing complexity, distance, pace, and/or accessibility. Building on decades of NASA experience and success in the design, fabrication, and integration of robust and reliable automated systems for space and aeronautics, Langley's Autonomy Incubator seeks to bridge the gap between automation and autonomy by enabling safe autonomous operations via onboard sensing and perception systems in both data-rich and data-deprived environments. The Autonomy Incubator is focused on the challenge of mobility and manipulation in dynamic and unstructured environments by integrating technologies such as computer vision, visual odometry, real-time mapping, path planning, object detection and avoidance, object classification, adaptive control, sensor fusion, machine learning, and natural human-machine teaming. These technologies are implemented in an architectural framework developed in-house for easy integration and interoperability of cutting-edge hardware and software.

  4. Haptic Exploration in Humans and Machines: Attribute Integration and Machine Recognition/Implementation.

    DTIC Science & Technology

    1988-04-30

    haptic, hand, touch, vision, robot, object recognition, categorization...established that the haptic system has remarkable capabilities for object recognition. We define haptics as purposive touch. The basic tactual system...gathered ratings of the importance of dimensions for categorizing common objects by touch. Texture and hardness ratings strongly co-vary, which is

  5. (Computer) Vision without Sight

    PubMed Central

    Manduchi, Roberto; Coughlan, James

    2012-01-01

    Computer vision holds great promise for helping persons with blindness or visual impairments (VI) to interpret and explore the visual world. To this end, it is worthwhile to assess the situation critically by understanding the actual needs of the VI population and which of these needs might be addressed by computer vision. This article reviews the types of assistive technology application areas that have already been developed for VI, and the possible roles that computer vision can play in facilitating these applications. We discuss how appropriate user interfaces are designed to translate the output of computer vision algorithms into information that the user can quickly and safely act upon, and how system-level characteristics affect the overall usability of an assistive technology. Finally, we conclude by highlighting a few novel and intriguing areas of application of computer vision to assistive technology. PMID:22815563

  6. Cutting tool form compensation system and method

    DOEpatents

    Barkman, W.E.; Babelay, E.F. Jr.; Klages, E.J.

    1993-10-19

    A compensation system for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation utilizes a camera and a vision computer for gathering information at a preselected stage of a machining operation relating to the actual shape and size of the cutting edge of the cutting tool and for altering the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape of the cutting edge. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer utilizes the contrasting light intensities of the image to locate points therein which correspond to points along the actual cutting edge. Following a series of computations involving the determining of a tool center from the points identified along the tool edge, the results of the computations are fed to the controller where the preprogrammed path is altered as aforedescribed. 9 figures.
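    The vision computer's first two steps, separating the backlit tool from the bright background and locating the cutting edge, can be sketched on a toy intensity image. The threshold value and the centroid-as-center simplification are illustrative assumptions; the patent describes further computations on the edge points before the path is altered:

```python
def tool_edge_and_center(image, threshold=128):
    """Binarize a grayscale image (dark tool on bright background), collect
    tool pixels that touch a background pixel or the border (the edge), and
    return the edge points plus their centroid as a tool-center estimate."""
    h, w = len(image), len(image[0])
    tool = [[image[r][c] < threshold for c in range(w)] for r in range(h)]
    edge = []
    for r in range(h):
        for c in range(w):
            if not tool[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < h and 0 <= cc < w) or not tool[rr][cc]:
                    edge.append((r, c))
                    break
    cr = sum(p[0] for p in edge) / len(edge)
    cc = sum(p[1] for p in edge) / len(edge)
    return edge, (cr, cc)

# 5x5 toy image: dark 3x3 "tool" centered on a bright background
img = [[200] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        img[r][c] = 30
edge, center = tool_edge_and_center(img)
print(center)  # -> (2.0, 2.0)
```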

  7. Cutting tool form compensation system and method

    DOEpatents

    Barkman, William E.; Babelay, Jr., Edwin F.; Klages, Edward J.

    1993-01-01

    A compensation system for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation utilizes a camera and a vision computer for gathering information at a preselected stage of a machining operation relating to the actual shape and size of the cutting edge of the cutting tool and for altering the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape of the cutting edge. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer utilizes the contrasting light intensities of the image to locate points therein which correspond to points along the actual cutting edge. Following a series of computations involving the determining of a tool center from the points identified along the tool edge, the results of the computations are fed to the controller where the preprogrammed path is altered as aforedescribed.

  8. Galaxy morphology - An unsupervised machine learning approach

    NASA Astrophysics Data System (ADS)

    Schutter, A.; Shamir, L.

    2015-09-01

    Structural properties carry valuable information about the formation and evolution of galaxies, and are important for understanding the past, present, and future universe. Here we use an unsupervised machine learning methodology to analyze a network of similarities between galaxy morphological types, and automatically deduce a morphological sequence of galaxies. Application of the method to the EFIGI catalog shows that the morphological scheme produced by the algorithm is largely in agreement with the De Vaucouleurs system, demonstrating the ability of computer vision and machine learning methods to automatically profile galaxy morphological sequences. The unsupervised analysis method is based on comprehensive computer vision techniques that compute the visual similarities between the different morphological types. Rather than relying on human cognition, the proposed system deduces the similarities between sets of galaxy images in an automatic manner, and is therefore not limited by the number of galaxies being analyzed. The source code of the method is publicly available, and the protocol of the experiment is included in the paper so that the experiment can be replicated and the method can be used to analyze user-defined datasets of galaxy images.

  9. The Cognition and Neuroergonomics (CaN) Collaborative Technology Alliance (CTA): Scientific Vision, Approach, and Translational Paths

    DTIC Science & Technology

    2012-09-01

    The Cognition and Neuroergonomics (CaN) Collaborative Technology Alliance (CTA): Scientific Vision, Approach, and Translational Paths, by Kelvin S. Oie... Report date: September 2012; report type: Final.

  10. Technology of machine tools. Volume 4. Machine tool controls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  11. Technology of machine tools. Volume 3. Machine tool mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tlusty, J.

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  12. Technology of machine tools. Volume 5. Machine tool accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hocken, R.J.

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  13. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

    Stereo vision technology using two or more cameras can recover 3D information from the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle judge the pavement conditions within the field of view and measure the obstacles on the road. In this paper, the use of stereo vision for obstacle-avoidance measurement in an autonomous vehicle is studied, and the key techniques are analyzed and discussed. The hardware of the system was built and the software debugged, and the measurement performance is illustrated with measured data. Experiments show that a 3D reconstruction within the field of view can be obtained effectively by stereo vision, providing the basis for pavement condition judgment. Compared with the navigation radar used in unmanned vehicle measuring systems, the stereo vision system has the advantages of low cost, measurement range, and so on, and it has good application prospects.
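    The core of recovering 3D from two cameras is triangulation: for a rectified stereo pair with focal length f (in pixels), baseline B, and disparity d between the two views of a point, depth is Z = f·B/d. A minimal sketch (the camera parameters below are illustrative assumptions, not values from the paper):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# 700 px focal length, 0.35 m baseline: a 24.5 px disparity -> 10 m away
print(depth_from_disparity(700.0, 0.35, 24.5))  # -> 10.0
```

    The inverse relationship between depth and disparity is why stereo accuracy degrades with distance, one of the trade-offs weighed against radar in the abstract above.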

  14. Machine vision inspection of railroad track

    DOT National Transportation Integrated Search

    2011-01-10

    North American Railways and the United States Department of Transportation : (US DOT) Federal Railroad Administration (FRA) require periodic inspection of railway : infrastructure to ensure the safety of railway operation. This inspection is a critic...

  15. INFIBRA: machine vision inspection of acrylic fiber production

    NASA Astrophysics Data System (ADS)

    Davies, Roger; Correia, Bento A. B.; Contreiras, Jose; Carvalho, Fernando D.

    1998-10-01

    This paper describes the implementation of INFIBRA, a machine vision system for the inspection of acrylic fiber production lines. The system was developed by INETI under a contract from Fisipe, Fibras Sinteticas de Portugal, S.A. At Fisipe there are ten production lines in continuous operation, each approximately 40 m in length. A team of operators used to perform periodic manual visual inspection of each line in conditions of high ambient temperature and humidity. It is not surprising that failures in the manual inspection process occurred with some frequency, with consequences that ranged from reduced fiber quality to production stoppages. The INFIBRA system architecture is a specialization of a generic, modular machine vision architecture based on a network of Personal Computers (PCs), each equipped with a low cost frame grabber. Each production line has a dedicated PC that performs automatic inspection, using specially designed metrology algorithms, via four video cameras located at key positions on the line. The cameras are mounted inside custom-built, hermetically sealed water-cooled housings to protect them from the unfriendly environment. The ten PCs, one for each production line, communicate with a central PC via a standard Ethernet connection. The operator controls all aspects of the inspection process, from configuration through to handling alarms, via a simple graphical interface on the central PC. At any time the operator can also view on the central PC's screen the live image from any one of the 40 cameras employed by the system.

  16. Feature extraction algorithm for space targets based on fractal theory

    NASA Astrophysics Data System (ADS)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to extend the life of satellites and reduce launch and operating costs, on-orbit satellite servicing (conducting repairs, upgrades and refueling) is becoming much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking for space surveillance. Machine vision has been applied to research on the relative pose of spacecraft, and feature extraction algorithms are the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method obtains a gray-level image distributed by fractal dimension, using the Differential Box-Counting (DBC) approach of fractal theory to restrain noise. Continuous edges are then detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also keeps the inner details; meanwhile, edge extraction is processed only in the moving area, greatly reducing computation. Simulation results compare edge detection using the presented method with other detection methods, and indicate that the presented algorithm is a valid way to solve the relative pose problem for spacecraft.
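
    The abstract does not give implementation details, but the Differential Box-Counting step it references is a standard technique; a minimal sketch of a DBC fractal-dimension estimate for a square grayscale patch (function name and box sizes are illustrative choices, not from the paper) might look like:

```python
import numpy as np

def dbc_fractal_dimension(patch, sizes=(2, 4, 8)):
    """Differential Box-Counting estimate of the fractal dimension of a
    square grayscale patch (values in 0..255). Smooth areas approach
    FD = 2; noisy or edge-rich areas score higher."""
    M = patch.shape[0]
    G = 256.0  # number of gray levels
    log_inv_r, log_N = [], []
    for s in sizes:
        h = s * G / M  # box height in gray levels for grid size s
        n_boxes = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                block = patch[i:i + s, j:j + s]
                top = int(np.ceil((block.max() + 1) / h))
                bottom = int(np.ceil((block.min() + 1) / h))
                n_boxes += top - bottom + 1  # boxes spanned by this column
        log_inv_r.append(np.log(M / s))
        log_N.append(np.log(n_boxes))
    # FD is the slope of log N_r against log(1/r)
    slope, _ = np.polyfit(log_inv_r, log_N, 1)
    return slope
```

    Sliding this estimator over an image yields the fractal-dimension map from which edge regions can be separated from noise.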

  17. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision.

    PubMed

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-22

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is the key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from a pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. Under applied deformation the electronic-skin actively outputs a piezoelectric voltage, and this output signal can be significantly influenced by UV illumination. The piezoelectric output can thus serve as both the photodetecting signal and the electric power source. The reliability is demonstrated over 200 light on-off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin, which simply mimics the human retina, may have potential application in vision substitution.
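
    The 6 × 6 mapping described above can be illustrated with a toy readout model; the dark-state level and deviation threshold below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def map_uv_pattern(voltages, dark_level, rel_threshold=0.2):
    """Binary UV image from the 6 x 6 piezo-output matrix: a pixel is
    'on' when its piezoelectric output deviates from the dark-state
    level by more than rel_threshold (fractional change)."""
    deviation = np.abs(voltages - dark_level) / dark_level
    return (deviation > rel_threshold).astype(int)
```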

  18. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision

    NASA Astrophysics Data System (ADS)

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-01

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is the key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from a pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. Under applied deformation the electronic-skin actively outputs a piezoelectric voltage, and this output signal can be significantly influenced by UV illumination. The piezoelectric output can thus serve as both the photodetecting signal and the electric power source. The reliability is demonstrated over 200 light on–off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin, which simply mimics the human retina, may have potential application in vision substitution.

  19. Narrowband light detection via internal quantum efficiency manipulation of organic photodiodes

    NASA Astrophysics Data System (ADS)

    Armin, Ardalan; Jansen-van Vuuren, Ross D.; Kopidakis, Nikos; Burn, Paul L.; Meredith, Paul

    2015-02-01

    Spectrally selective light detection is vital for full-colour and near-infrared (NIR) imaging and machine vision. This is not possible with traditional broadband-absorbing inorganic semiconductors without input filtering, and is yet to be achieved for narrowband absorbing organic semiconductors. We demonstrate the first sub-100 nm full-width-at-half-maximum visible-blind red and NIR photodetectors with state-of-the-art performance across critical response metrics. These devices are based on organic photodiodes with optically thick junctions. Paradoxically, we use broadband-absorbing organic semiconductors and utilize the electro-optical properties of the junction to create the narrowest NIR-band photoresponses yet demonstrated. In this context, these photodiodes outperform the incumbent technology (input filtered inorganic semiconductor diodes) and emerging technologies such as narrow absorber organic semiconductors or quantum nanocrystals. The design concept allows for response tuning and is generic for other spectral windows. Furthermore, it is material-agnostic and applicable to other disordered and polycrystalline semiconductors.

  20. Watch your step! A frustrated total internal reflection approach to forensic footwear imaging

    NASA Astrophysics Data System (ADS)

    Needham, J. A.; Sharp, J. S.

    2016-02-01

    Forensic image retrieval and processing are vital tools in the fight against crime e.g. during fingerprint capture. However, despite recent advances in machine vision technology and image processing techniques (and contrary to the claims of popular fiction) forensic image retrieval is still widely being performed using outdated practices involving inkpads and paper. Ongoing changes in government policy, increasing crime rates and the reduction of forensic service budgets increasingly require that evidence be gathered and processed more rapidly and efficiently. A consequence of this is that new, low-cost imaging technologies are required to simultaneously increase the quality and throughput of the processing of evidence. This is particularly true in the burgeoning field of forensic footwear analysis, where images of shoe prints are being used to link individuals to crime scenes. Here we describe one such approach based upon frustrated total internal reflection imaging that can be used to acquire images of regions where shoes contact rigid surfaces.

  1. Autonomy enables new science missions

    NASA Astrophysics Data System (ADS)

    Doyle, Richard J.; Gor, Victoria; Man, Guy K.; Stolorz, Paul E.; Chapman, Clark; Merline, William J.; Stern, Alan

    1997-01-01

    The challenge of space flight in NASA's future is to enable smaller, more frequent and intensive space exploration at much lower total cost without substantially decreasing mission reliability, capability, or the scientific return on investment. The most effective way to achieve this goal is to build intelligent capabilities into the spacecraft themselves. Our technological vision for meeting the challenge of returning quality science through limited communication bandwidth will actually put scientists in a more direct link with the spacecraft than they have enjoyed to date. Technologies such as pattern recognition and machine learning can place a part of the scientist's awareness onboard the spacecraft to prioritize downlink or to autonomously trigger time-critical follow-up observations (particularly important in flyby missions) without ground interaction. Onboard knowledge discovery methods can be used to include candidate discoveries in each downlink for scientists' scrutiny. Such capabilities will allow scientists to quickly reprioritize missions in a much more intimate and efficient manner than is possible today. Ultimately, new classes of exploration missions will be enabled.

  2. Watch your step! A frustrated total internal reflection approach to forensic footwear imaging.

    PubMed

    Needham, J A; Sharp, J S

    2016-02-16

    Forensic image retrieval and processing are vital tools in the fight against crime e.g. during fingerprint capture. However, despite recent advances in machine vision technology and image processing techniques (and contrary to the claims of popular fiction) forensic image retrieval is still widely being performed using outdated practices involving inkpads and paper. Ongoing changes in government policy, increasing crime rates and the reduction of forensic service budgets increasingly require that evidence be gathered and processed more rapidly and efficiently. A consequence of this is that new, low-cost imaging technologies are required to simultaneously increase the quality and throughput of the processing of evidence. This is particularly true in the burgeoning field of forensic footwear analysis, where images of shoe prints are being used to link individuals to crime scenes. Here we describe one such approach based upon frustrated total internal reflection imaging that can be used to acquire images of regions where shoes contact rigid surfaces.

  3. Narrowband Light Detection via Internal Quantum Efficiency Manipulation of Organic Photodiodes

    DOE PAGES

    Armin, A.; Jansen-van Vuuren, R. D.; Kopidakis, N.; ...

    2015-02-01

    Spectrally selective light detection is vital for full-colour and near-infrared (NIR) imaging and machine vision. This is not possible with traditional broadband-absorbing inorganic semiconductors without input filtering, and is yet to be achieved for narrowband absorbing organic semiconductors. We demonstrate the first sub-100 nm full-width-at-half-maximum visible-blind red and NIR photodetectors with state-of-the-art performance across critical response metrics. These devices are based on organic photodiodes with optically thick junctions. Paradoxically, we use broadband-absorbing organic semiconductors and utilize the electro-optical properties of the junction to create the narrowest NIR-band photoresponses yet demonstrated. In this context, these photodiodes outperform the incumbent technology (input filtered inorganic semiconductor diodes) and emerging technologies such as narrow absorber organic semiconductors or quantum nanocrystals. The design concept allows for response tuning and is generic for other spectral windows. Furthermore, it is material-agnostic and applicable to other disordered and polycrystalline semiconductors.

  4. Fall Down Detection Under Smart Home System.

    PubMed

    Juang, Li-Hong; Wu, Ming-Ni

    2015-10-01

    Medical technology is an inevitable trend for an aging population, and intelligent home care is therefore an important direction for science and technology development; in particular, in-home safety management for the elderly is becoming more and more important. In this research, a low-complexity algorithm using a triangular pattern rule is proposed that can quickly detect human fall-down movements from the camera vision of a robot installed at home, enabling real-time judgment of falls by in-home elderly people. This paper presents a preliminary design and experimental results for detecting fall-down movements from body posture, using image pre-processing and three triangular-mass-central points to extract the characteristics. The results show that with the adopted characteristic values the accuracy reaches up to 90% for a single-character posture, and up to 100% when a continuous-time sampling criterion and a support vector machine (SVM) classifier are used.
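
    The abstract does not define the three triangular-mass-central points precisely; one plausible reading, sketched here with all names assumed, splits the silhouette's bounding box into three horizontal bands and uses the band mass centres as the feature vector that would be fed to the SVM:

```python
import numpy as np

def posture_features(silhouette):
    """Extract three mass-centre points from a binary silhouette by
    splitting its bounding box into head/torso/leg bands; the six
    normalised coordinates form a posture feature vector (e.g. for
    an SVM classifier)."""
    rows, cols = np.nonzero(silhouette)
    box = silhouette[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    h, w = box.shape
    feats, offset = [], 0
    for band in np.array_split(box, 3, axis=0):
        r, c = np.nonzero(band)
        # (row, col) mass centre of the band, normalised to the box size
        feats.extend([(r.mean() + offset) / h, c.mean() / w])
        offset += band.shape[0]
    return np.array(feats)
```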

  5. The NASA Program Management Tool: A New Vision in Business Intelligence

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Swanson, Keith; Putz, Peter; Bell, David G.; Gawdiak, Yuri

    2006-01-01

    This paper describes a novel approach to business intelligence and program management for large technology enterprises like the U.S. National Aeronautics and Space Administration (NASA). Two key distinctions of the approach are that 1) standard business documents are the user interface, and 2) a "schema-less" XML database enables flexible integration of technology information for use by both humans and machines in a highly dynamic environment. The implementation utilizes patent-pending NASA software called the NASA Program Management Tool (PMT) and its underlying "schema-less" XML database called Netmark. Initial benefits of PMT include elimination of discrepancies between business documents that use the same information and "paperwork reduction" for program and project management in the form of reducing the effort required to understand standard reporting requirements and to comply with those reporting requirements. We project that the underlying approach to business intelligence will enable significant benefits in the timeliness, integrity and depth of business information available to decision makers on all organizational levels.

  6. 2005 AG20/20 Annual Review

    NASA Technical Reports Server (NTRS)

    Ross, Kenton W.; McKellip, Rodney D.

    2005-01-01

    Topics covered include: Implementation and Validation of Sensor-Based Site-Specific Crop Management; Enhanced Management of Agricultural Perennial Systems (EMAPS) Using GIS and Remote Sensing; Validation and Application of Geospatial Information for Early Identification of Stress in Wheat; Adapting and Validating Precision Technologies for Cotton Production in the Mid-Southern United States - 2004 Progress Report; Development of a System to Automatically Geo-Rectify Images; Economics of Precision Agriculture Technologies in Cotton Production-AG 2020 Prescription Farming Automation Algorithms; Field Testing a Sensor-Based Applicator for Nitrogen and Phosphorus Application; Early Detection of Citrus Diseases Using Machine Vision and DGPS; Remote Sensing of Citrus Tree Stress Levels and Factors; Spectral-based Nitrogen Sensing for Citrus; Characterization of Tree Canopies; In-field Sensing of Shallow Water Tables and Hydromorphic Soils with an Electromagnetic Induction Profiler; Maintaining the Competitiveness of Tree Fruit Production Through Precision Agriculture; Modeling and Visualizing Terrain and Remote Sensing Data for Research and Education in Precision Agriculture; Thematic Soil Mapping and Crop-Based Strategies for Site-Specific Management; and Crop-Based Strategies for Site-Specific Management.

  7. Humans and machines in space: The vision, the challenge, the payoff; Proceedings of the 29th Goddard Memorial Symposium, Washington, Mar. 14, 15, 1991

    NASA Astrophysics Data System (ADS)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    The present conference discusses the currently envisioned goals of human-machine systems in spacecraft environments, prospects for human exploration of the solar system, and plausible methods for meeting human needs in space. Also discussed are the problems of human-machine interaction in long-duration space flights, remote medical systems for space exploration, the use of virtual reality for planetary exploration, the alliance between U.S. Antarctic and space programs, and the economic and educational impacts of the U.S. space program.

  8. Technology of machine tools. Volume 2. Machine tool systems management and utilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomson, A.R.

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  9. Fast and robust generation of feature maps for region-based visual attention.

    PubMed

    Aziz, Muhammad Zaheer; Mertsching, Bärbel

    2008-05-01

    Visual attention is one of the important phenomena in biological vision which can be followed to achieve more efficiency, intelligence, and robustness in artificial vision systems. This paper investigates a region-based approach that performs pixel clustering prior to the processes of attention in contrast to late clustering as done by contemporary methods. The foundation steps of feature map construction for the region-based attention model are proposed here. The color contrast map is generated based upon the extended findings from the color theory, the symmetry map is constructed using a novel scanning-based method, and a new algorithm is proposed to compute a size contrast map as a formal feature channel. Eccentricity and orientation are computed using the moments of obtained regions and then saliency is evaluated using the rarity criteria. The efficient design of the proposed algorithms allows incorporating five feature channels while maintaining a processing rate of multiple frames per second. Another salient advantage over the existing techniques is the reusability of the salient regions in the high-level machine vision procedures due to preservation of their shapes and precise locations. The results indicate that the proposed model has the potential to efficiently integrate the phenomenon of attention into the main stream of machine vision and systems with restricted computing resources such as mobile robots can benefit from its advantages.
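
    The rarity criterion used for saliency above can be illustrated for a single scalar region feature (the histogram binning and function name are assumptions; the model's actual channels are multi-dimensional):

```python
import numpy as np

def rarity_saliency(features, bins=8):
    """Score each region by the rarity of its scalar feature value:
    regions whose feature falls in a sparsely populated histogram bin
    are considered more salient."""
    hist, edges = np.histogram(features, bins=bins)
    # map each region's feature to its histogram bin
    idx = np.clip(np.digitize(features, edges[1:-1]), 0, bins - 1)
    freq = hist[idx] / len(features)   # fraction of regions sharing the bin
    return 1.0 - freq                  # rare -> high saliency
```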

  10. Development of a machine vision system for automated structural assembly

    NASA Technical Reports Server (NTRS)

    Sydow, P. Daniel; Cooper, Eric G.

    1992-01-01

    Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose estimation accuracy to define the target position.
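
    The pose-estimation step is not detailed in the abstract; a minimal pinhole-camera sketch (the camera parameters and target size here are hypothetical) shows how a target's 3-D position can be recovered from its image centroid and apparent size:

```python
def target_position(u, v, blob_diameter_px, cam):
    """Pinhole-model position (metres, camera frame) of a circular
    target of known physical diameter, given its image centroid (u, v)
    and apparent diameter in pixels."""
    # range from apparent size: z = f * D_real / d_image
    z = cam["f_px"] * cam["target_diameter_m"] / blob_diameter_px
    # lateral offsets from the principal point, back-projected at range z
    x = (u - cam["cx"]) * z / cam["f_px"]
    y = (v - cam["cy"]) * z / cam["f_px"]
    return x, y, z
```

    A full pose estimate also requires orientation, which a system like this would derive from the geometry of multiple target features rather than a single blob.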

  11. The potential and prospects of proximal remote sensing of arthropod pests.

    PubMed

    Nansen, Christian

    2016-04-01

    Bench-top or proximal remote sensing applications are widely used as part of quality control and machine vision systems in commercial operations. In addition, these technologies are becoming increasingly important in insect systematics and studies of insect physiology and pest management. This paper provides a review and discussion of how proximal remote sensing may contribute valuable quantitative information regarding identification of species, assessment of insect responses to insecticides, insect host responses to parasitoids and performance of biological control agents. The future role of proximal remote sensing is discussed as an exciting path for novel multidisciplinary research among entomologists and scientists from a wide range of other disciplines, including image processing engineers, medical engineers, research pharmacists and computer scientists. © 2015 Society of Chemical Industry.

  12. Technology for NASA's Planetary Science Vision 2050.

    NASA Technical Reports Server (NTRS)

    Lakew, B.; Amato, D.; Freeman, A.; Falker, J.; Turtle, Elizabeth; Green, J.; Mackwell, S.; Daou, D.

    2017-01-01

    NASA's Planetary Science Division (PSD) initiated and sponsored a very successful community Workshop held from Feb. 27 to Mar. 1, 2017 at NASA Headquarters. The purpose of the Workshop was to develop a vision of planetary science research and exploration for the next three decades, until 2050. This abstract summarizes some of the salient technology needs discussed during the three-day workshop and at a technology panel on the final day. It is not meant to be a final report on technology to achieve the science vision for 2050.

  13. Operator vision aids for space teleoperation assembly and servicing

    NASA Technical Reports Server (NTRS)

    Brooks, Thurston L.; Ince, Ilhan; Lee, Greg

    1992-01-01

    This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.

  14. Flight Test Evaluation of Situation Awareness Benefits of Integrated Synthetic Vision System Technology for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III

    2005-01-01

    Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.

  15. Beyond the Foundations: The Role of Vision and Belief in Teachers' Preparation for Integration of Technology.

    ERIC Educational Resources Information Center

    Albion, Peter R.; Ertmer, Peggy A.

    2002-01-01

    Discussion of the successful adoption and use of information technology in education focuses on teacher's personal philosophical beliefs and how they influence the successful integration of technology. Highlights include beliefs and teacher behavior; changing teachers' beliefs; and using technology to affect change in teachers' visions and…

  16. The Clear Creek Envirohydrologic Observatory: From Vision Toward Reality

    NASA Astrophysics Data System (ADS)

    Just, C.; Muste, M.; Kruger, A.

    2007-12-01

    As the vision of a fully-functional Clear Creek Envirohydrologic Observatory comes closer to reality, the opportunities for significant watershed science advances in the near future become more apparent. As a starting point to approaching this vision, we focused on creating a working example of cyberinfrastructure in the hydrologic and environmental sciences. The system will integrate a broad range of technologies and ideas: wired and wireless sensors, low power wireless communication, embedded microcontrollers, commodity cellular networks, the internet, unattended quality assurance, metadata, relational databases, machine-to-machine communication, interfaces to hydrologic and environmental models, feedback, and external inputs. Hardware: An accomplishment to date is "in-house" developed sensor networking electronics to complement commercially available communications. The first of these networkable sensors are dielectric soil moisture probes that are arrayed and equipped with wireless connectivity for communications. Commercially available data logging and telemetry-enabled systems deployed at the Clear Creek testbed include a Campbell Scientific CR1000 datalogger, a Redwing 100 cellular modem, a YA Series yagi antenna, a NP12 rechargeable battery, and a BP SX20U solar panel. This networking equipment has been coupled with Hach DS5X water quality sondes, DTS-12 turbidity probes and MicroLAB nutrient analyzers. Software: Our existing data model is an Arc Hydro-based geodatabase customized with applications for extraction and population of the database with third party data. The following third party data are acquired automatically and in real time into the Arc Hydro customized database: 1) geophysical data: 10m DEM and soil grids, soils; 2) land use/land cover data; and 3) eco-hydrological: radar-based rainfall estimates, stream gage, streamlines, and water quality data.
New processing software for the analysis of Acoustic Doppler Current Profiler (ADCP) measurements has been finalized. The software package provides mean flow field and turbulence characteristics obtained by operating the ADCP at fixed points or using the moving-boat approach. Current Work: The current development work is focused on extracting and populating the Clear Creek database with in-situ measurements acquired and transmitted in real time with sensors deployed in the Clear Creek watershed.

  17. Visions of the Cosmos

    NASA Astrophysics Data System (ADS)

    Collins Petersen, Carolyn; Brandt, John C.

    2003-11-01

    Introduction; 1. Eyes in the sky; 2. Telescopes: multi-frequency time machines; 3. Planets on a pixel; 4. The lives of stars; 5. Galaxies - tales of stellar cities; 6. The once and future universe; 7. Stargazing - the next generation; Glossary.

  18. A real-time surface inspection system for precision steel balls based on machine vision

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen

    2016-07-01

    Precision steel balls are among the most fundamental components for motion and power transmission, and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need for a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested at a feeding speed of 4 pcs/s with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm2, which meets the requirement for inspecting ISO grade 100 precision steel balls.
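
    The system's inspection algorithms are not published in the abstract; a toy version of the final area check (the darkness threshold and pixel scale below are assumed values) conveys the idea of the minimum-flaw-area criterion:

```python
import numpy as np

def reject_ball(image, dark_thresh, mm2_per_px, min_flaw_mm2=0.01):
    """Flag a ball for rejection when pixels darker than the polished
    surface add up to at least the minimum detectable flaw area
    (0.01 mm^2, per the quoted system specification)."""
    flaw_px = int((image < dark_thresh).sum())
    return flaw_px * mm2_per_px >= min_flaw_mm2
```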

  19. A new approach based on Machine Learning for predicting corneal curvature (K1) and astigmatism in patients with keratoconus after intracorneal ring implantation.

    PubMed

    Valdés-Mas, M A; Martín-Guerrero, J D; Rupérez, M J; Pastor, F; Dualde, C; Monserrat, C; Peris-Martínez, C

    2014-08-01

    Keratoconus (KC) is the most common type of corneal ectasia. Corneal transplantation was the treatment of choice until the last decade; however, intra-corneal ring implantation has become increasingly common and is now widely used to treat KC, thus avoiding a corneal transplantation. This work proposes a new approach based on Machine Learning to predict the vision gain of KC patients after ring implantation, assessed by means of the corneal curvature and the astigmatism. Different models were proposed; the best results were achieved by an artificial neural network based on the Multilayer Perceptron. The error provided by the best model was 0.97D for corneal curvature and 0.93D for astigmatism. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
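
    The paper's best model is a Multilayer Perceptron, which the abstract does not specify further; as a self-contained stand-in (not the authors' model), a linear least-squares regressor over hypothetical pre-operative features shows the shape of the prediction task:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit with a bias column; a simplified stand-in for
    the paper's Multilayer Perceptron regressor."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    """Predict targets (e.g. post-implantation K1) for new patients."""
    return np.hstack([X, np.ones((len(X), 1))]) @ w
```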

  20. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
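
    One standard compensation for the lighting and sensor non-uniformities the survey discusses is flat-field correction; a hedged sketch (the reference-frame names are conventional, not from this paper):

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Two-reference correction: subtract the dark-frame offset, divide
    out the illumination/gain pattern captured in a flat-field frame,
    then rescale to the flat frame's mean level."""
    gain = flat.astype(float) - dark
    gain = np.where(gain == 0, 1.0, gain)  # guard against division by zero
    return (raw.astype(float) - dark) / gain * gain.mean()
```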

  1. Fragrant pear sexuality recognition with machine vision

    NASA Astrophysics Data System (ADS)

    Ma, Benxue; Ying, Yibin

    2006-10-01

    In this research, a method to identify the sexuality of Kuler fragrant pears with machine vision was developed. Kuler fragrant pears occur as male and female fruit, which differ markedly in flavor. To detect the sexuality of a pear, images were acquired with a CCD color camera. Before feature extraction, preprocessing was performed on the acquired images to remove noise and unnecessary content. Color, perimeter and area features of the pear bottom image were extracted by digital image processing techniques, and the sexuality was determined from a complexity measure obtained from the perimeter and area. Using 128 Kuler fragrant pears as samples, a good recognition rate between male and female pears was obtained (82.8%). The results show this method can distinguish male from female pears with good accuracy.
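
    The complexity measure from perimeter and area is presumably the standard circularity-type shape descriptor; a sketch (the decision threshold, and which sex has the more complex outline, are assumptions that would need calibration on real samples):

```python
import math

def shape_complexity(perimeter, area):
    """Dimensionless complexity P^2 / (4*pi*A): exactly 1.0 for a
    circle, larger for more irregular outlines."""
    return perimeter ** 2 / (4 * math.pi * area)

def classify_pear(perimeter, area, threshold=1.2):
    """Hypothetical decision rule on the bottom-outline complexity."""
    return "male" if shape_complexity(perimeter, area) > threshold else "female"
```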

  2. Nondestructive Detection of the Internalquality of Apple Using X-Ray and Machine Vision

    NASA Astrophysics Data System (ADS)

    Yang, Fuzeng; Yang, Liangliang; Yang, Qing; Kang, Likui

    The internal quality of an apple cannot be judged by eye during sorting, which allows defective fruit to reach the market. This paper describes an instrument using X-rays and machine vision, with the following steps used to process the X-ray image and identify mould-core apples. First, a lifting wavelet transform was applied to obtain one low-frequency image and three high-frequency images. Second, the low-frequency image was enhanced through histogram equalization. The edge of each apple image was then detected with the Canny operator. Finally, a threshold on the measured diameter of the apple core was used to separate mould-core from normal apples. The experimental results show that this method can detect mould-core apples on-line with low time consumption, less than 0.03 seconds per apple, at an accuracy of 92%.
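
    The first step uses a lifting wavelet transform to split the image into low- and high-frequency parts. The abstract does not name the wavelet; the simplest lifting scheme is the Haar case, sketched below in 1-D (applied along rows and then columns it yields the 2-D subbands):

    ```python
    import numpy as np

    def haar_lifting_1d(x):
        """One level of the Haar wavelet via lifting: split the signal into
        even/odd samples, predict the odd sample from the even one (detail =
        high frequency), then update the even sample (approx = low frequency,
        here the pairwise mean)."""
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        detail = odd - even          # predict step
        approx = even + detail / 2.0 # update step
        return approx, detail

    x = np.array([1, 2, 3, 4, 8, 8, 2, 0])
    approx, detail = haar_lifting_1d(x)
    print(approx)  # [1.5 3.5 8.  1. ]  pairwise means (low frequency)
    print(detail)  # [ 1.  1.  0. -2.]  pairwise differences (high frequency)
    ```

    The low-frequency output is what the paper's pipeline goes on to equalize and edge-detect; lifting is attractive here because it is in-place and fast, which matters for the sub-0.03 s cycle time.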

  3. Computational Analysis of Behavior.

    PubMed

    Egnor, S E Roian; Branson, Kristin

    2016-07-08

    In this review, we discuss the emerging field of computational behavioral analysis: the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important to both obtaining biologically relevant behavioral data and enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality of, or alter important aspects of, behavior. To enable biologists to design experiments that obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.

  4. Machine vision application in animal trajectory tracking.

    PubMed

    Koniar, Dušan; Hargaš, Libor; Loncová, Zuzana; Duchoň, František; Beňo, Peter

    2016-04-01

    This article was motivated by physicians' demand for technical support, based on machine vision tools, for research into pathologies of the gastrointestinal tract [10]. The proposed solution is intended as a less expensive alternative to existing RF (radio frequency) methods. The objective of the experiment was to evaluate the amount of animal motion as a function of the degree of pathology (gastric ulcer). In the theoretical part of the article, several methods of animal trajectory tracking are presented: two differential methods based on background subtraction, thresholding methods based on global and local thresholds, and color matching against a chosen template containing the searched spectrum of colors. The methods were tested offline on five video samples, each showing a guinea pig moving in a cage under various lighting conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
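
    The background-subtraction family of trackers mentioned here can be reduced to a few lines: difference each frame against a background frame, threshold, and take the centroid of the foreground mask as the animal's position. A minimal numpy sketch on a synthetic clip (frame sizes, threshold, and the moving blob are illustrative):

    ```python
    import numpy as np

    def track_centroids(frames, thresh=30):
        """Background subtraction against the first frame, a global
        threshold, then the centroid of the foreground mask -> one
        (row, col) position per subsequent frame."""
        background = frames[0].astype(float)
        traj = []
        for f in frames[1:]:
            mask = np.abs(f.astype(float) - background) > thresh
            ys, xs = np.nonzero(mask)
            if len(ys):
                traj.append((ys.mean(), xs.mean()))
        return traj

    # Synthetic clip: a bright 2x2 "animal" moving right on a dark cage floor.
    clip = [np.zeros((8, 10), np.uint8)]          # empty background frame
    for x in (0, 2, 4, 6):
        f = np.zeros((8, 10), dtype=np.uint8)
        f[3:5, x:x + 2] = 255
        clip.append(f)
    traj = track_centroids(clip)
    print(traj)  # centroid column drifts rightward: 0.5, 2.5, 4.5, 6.5
    ```

    The differential and thresholding methods in the article differ mainly in how the mask is formed (frame-to-frame vs. frame-to-background differences, global vs. local thresholds); the centroid step is common to all of them.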

  5. KENNEDY SPACE CENTER, FLA. - Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla., is the site where Center Director Jim Kennedy and astronaut Kay Hire shared the agency’s new vision for space exploration with the next generation of explorers. Kennedy talked with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.

    NASA Image and Video Library

    2004-02-20


  6. 75 FR 41894 - Wapakoneta Machine Company, Currently Known as EF Industrial Technologies, Inc., Wapakoneta, OH...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-19

    ... Company, Currently Known as EF Industrial Technologies, Inc., Wapakoneta, OH; Amended Certification... of early 2010, Wapakoneta Machine Company is currently known as EF Industrial Technologies, Inc. Some... Wapakoneta Machine Company, currently known as EF Industrial Technologies, Inc., Wapakoneta, Ohio became...

  7. The Future of Learning Technology: Some Tentative Predictions

    ERIC Educational Resources Information Center

    Rushby, Nick

    2013-01-01

    This paper is a snapshot of an evolving vision of what the future may hold for learning technology. It offers three personal visions of the future and raises many questions that need to be explored if learning technology is to realise its full potential.

  8. Sensor fusion II: Human and machine strategies; Proceedings of the Meeting, Philadelphia, PA, Nov. 6-9, 1989

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1990-01-01

    Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.

  9. An Rx for 20/20 Vision: Vision Planning and Education.

    ERIC Educational Resources Information Center

    Chrisman, Gerald J.; Holliday, Clifford R.

    1996-01-01

    Discusses the Dallas Independent School District's decision to adopt an integrated technology infrastructure and the importance of vision planning for long term goals. Outlines the vision planning process: first draft; environmental projection; restatement of vision in terms of market projections, anticipated customer needs, suspected competitor…

  10. Convolutional neural network guided blue crab knuckle detection for autonomous crab meat picking machine

    NASA Astrophysics Data System (ADS)

    Wang, Dongyi; Vinson, Robert; Holmes, Maxwell; Seibel, Gary; Tao, Yang

    2018-04-01

    The Atlantic blue crab is among the highest-valued seafood along the American Eastern Seaboard. Currently, the crab processing industry is highly dependent on manual labor, but there is great potential for vision-guided intelligent machines to automate the meat picking process. Studies show that the back-fin knuckles are robust features containing information about a crab's size, orientation, and the position of its meat compartments. Our studies also make it clear that reliably detecting the knuckles in images is challenging due to the knuckle's small size, anomalous shape, and similarity to the joints in the legs and claws. An accurate and reliable computer vision algorithm is proposed to detect the crab's back-fin knuckles in digital images. Convolutional neural networks (CNNs) localize rough knuckle positions with 97.67% accuracy, transforming a global detection problem into a local one. Compared with rough localization based on human experience or other machine learning classification methods, the CNN gives the best localization results. Within the rough knuckle region, a k-means clustering method then extracts the exact knuckle positions from the back-fin knuckle color features. The exact knuckle position helps generate a crab cutline in the XY plane using a template matching method. This is a pioneering research project in crab image analysis and offers advanced machine intelligence for automated crab processing.
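
    The refinement stage clusters pixel colors with k-means to separate knuckle pixels from shell. A generic k-means sketch in numpy follows; the colour values, cluster count, and deterministic initialization are illustrative assumptions, not the paper's settings:

    ```python
    import numpy as np

    def kmeans(points, k, iters=20):
        """Plain k-means: assign each point to its nearest centre, then move
        each centre to the mean of its members. Naive deterministic init:
        k evenly spaced samples (adequate for this demo)."""
        idx = np.linspace(0, len(points) - 1, k).astype(int)
        centres = points[idx].astype(float)
        for _ in range(iters):
            d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centres[j] = points[labels == j].mean(axis=0)
        return centres, labels

    # Two synthetic colour populations: reddish "knuckle" pixels vs whitish
    # shell pixels (RGB triples with a little variation).
    red = np.array([[200, 40, 40]] * 20) + np.arange(20)[:, None] % 5
    white = np.array([[240, 235, 230]] * 20) - np.arange(20)[:, None] % 5
    pixels = np.vstack([red, white]).astype(float)
    centres, labels = kmeans(pixels, k=2)
    # One cluster centre ends up near the red population, one near the white;
    # the knuckle cluster would be the one closest to a reference knuckle colour.
    ```

    In the actual system this runs only inside the CNN's rough localization window, which keeps the pixel count small and the clustering fast.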

  11. Wire connector classification with machine vision and a novel hybrid SVM

    NASA Astrophysics Data System (ADS)

    Chauhan, Vedang; Joshi, Keyur D.; Surgenor, Brian W.

    2018-04-01

    A machine vision-based system has been developed and tested that uses a novel hybrid Support Vector Machine (SVM) in a part inspection application with clear plastic wire connectors. The application required the system to differentiate between 4 different known styles of connectors plus one unknown style, for a total of 5 classes. The requirement to handle an unknown class is what necessitated the hybrid approach. The system was trained with the 4 known classes and tested with 5 classes (the 4 known plus the 1 unknown). The hybrid classification approach used two layers of SVMs: one layer was semi-supervised and the other layer was supervised. The semi-supervised SVM was a special case of unsupervised machine learning that classified test images as one of the 4 known classes (to accept) or as the unknown class (to reject). The supervised SVM classified test images as one of the 4 known classes and consequently would give false positives (FPs). Two methods were tested. The difference between the methods was that the order of the layers was switched. The method with the semi-supervised layer first gave an accuracy of 80% with 20% FPs. The method with the supervised layer first gave an accuracy of 98% with 0% FPs. Further work is being conducted to see if the hybrid approach works with other applications that have an unknown class requirement.
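
    The layered idea can be sketched independently of SVMs: one layer picks the best of the four known classes, the other decides whether to accept that answer or reject the part as unknown. The toy below uses a nearest-centroid classifier with a distance threshold as a deliberately simple stand-in for the paper's supervised and semi-supervised SVM layers (all data and the threshold are illustrative):

    ```python
    import numpy as np

    def fit_centroids(X, y):
        """One mean feature vector per known class label."""
        classes = sorted(set(y))
        return {c: X[np.array(y) == c].mean(axis=0) for c in classes}

    def classify_with_reject(x, centroids, reject_dist):
        """Supervised layer: nearest centroid among the known classes.
        Rejection layer: if even the best match is too far away, label the
        sample 'unknown' instead of forcing a known class (this is what
        suppresses false positives on the unseen fifth style)."""
        dists = {c: float(np.linalg.norm(x - m)) for c, m in centroids.items()}
        best = min(dists, key=dists.get)
        return best if dists[best] <= reject_dist else "unknown"

    # Four known connector styles A-D as 2-D feature clusters.
    X = np.array([[0, 0], [0, 1], [5, 5], [5, 6],
                  [9, 0], [9, 1], [0, 9], [0, 10]], float)
    y = ["A", "A", "B", "B", "C", "C", "D", "D"]
    cents = fit_centroids(X, y)
    print(classify_with_reject(np.array([0.2, 0.4]), cents, 2.0))   # A
    print(classify_with_reject(np.array([20.0, 20.0]), cents, 2.0)) # unknown
    ```

    The paper's finding that ordering matters (supervised layer first: 98% accuracy, 0% false positives) corresponds here to choosing whether the distance test is applied before or after the class decision.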

  12. Envisioning engineering education and practice in the coming intelligence convergence era — a complex adaptive systems approach

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    2013-12-01

    Some of the recent attempts for improving and transforming engineering education are reviewed. The attempts aim at providing entry-level engineers with the skills needed to address the challenges of future large-scale complex systems and projects. Some of the frontier sectors and future challenges for engineers are outlined. The major characteristics of the coming intelligence convergence era (the post-information age) are identified. These include the prevalence of smart devices and environments, the widespread application of anticipatory computing and predictive/prescriptive analytics, and a symbiotic relationship between humans and machines. Devices and machines will be able to learn from, and with, humans in a natural collaborative way. The recent game changers in learnscapes (learning paradigms, technologies, platforms, spaces, and environments) that can significantly impact engineering education in the coming era are identified. Among these are open educational resources, knowledge-rich classrooms, immersive interactive 3D learning, augmented reality, reverse instruction / flipped classroom, gamification, robots in the classroom, and adaptive personalized learning. Significant transformative changes in, and mass customization of, learning are envisioned to emerge from the synergistic combination of the game changers and other technologies. The realization of the aforementioned vision requires the development of a new multidisciplinary framework of emergent engineering for relating innovation, complexity, and cybernetics within future learning environments. The framework can be used to treat engineering education as a complex adaptive system, with dynamically interacting and communicating components (instructors and individual, small, and large groups of learners). The emergent behavior resulting from the interactions can produce a progressively better, continuously improving learning environment.
As a first step towards the realization of the vision, intelligent adaptive cyber-physical ecosystems need to be developed to facilitate collaboration between the various stakeholders of engineering education, and to accelerate the development of a skilled engineering workforce. The major components of the ecosystems include integrated knowledge discovery and exploitation facilities, blended learning and research spaces, novel ultra-intelligent software agents, multimodal and autonomous interfaces, and networked cognitive and tele-presence robots.

  13. Design and Development of a High Speed Sorting System Based on Machine Vision Guiding

    NASA Astrophysics Data System (ADS)

    Zhang, Wenchang; Mei, Jiangping; Ding, Yabin

    In this paper, a vision-based control strategy for high-speed pick-and-place tasks on an automated product line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper that grasps disordered objects from one moving conveyor and places them on another in order. A CCD camera captures one image each time the conveyor moves a distance ds, and object position and shape are obtained by image processing. A target-tracking method based on "servo motor + synchronous conveyor" is used to perform the high-speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-control strategy.
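
    The core of "servo motor + synchronous conveyer" tracking is that the camera only sees the object once; afterwards its position is dead-reckoned from the conveyor's servo encoder. A minimal sketch (the function name and all numbers are illustrative, not from the paper):

    ```python
    def predict_position(x_seen_mm, enc_seen, enc_now, mm_per_count):
        """Synchronous-conveyor tracking: the object's current position
        along the belt is its position at detection plus the belt travel
        since then, measured by the servo encoder (counts * mm/count)."""
        return x_seen_mm + (enc_now - enc_seen) * mm_per_count

    # Object imaged at 120.0 mm along the belt when the encoder read 1000
    # counts; the encoder now reads 1500 counts at 0.1 mm/count, so the
    # part has travelled a further 50 mm downstream.
    x_now = predict_position(120.0, 1000, 1500, 0.1)
    print(x_now)  # 170.0
    ```

    Because the prediction needs only an encoder read rather than another image, the robot can time its intercept at full conveyor speed, which is what makes the approach suitable for high-speed sorting.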

  14. An Autonomous Gps-Denied Unmanned Vehicle Platform Based on Binocular Vision for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.

    2018-04-01

    Vision-based navigation has become an attractive solution for autonomous planetary exploration. This paper presents our work on designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and on developing ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master computer, and a tracked chassis. The ARFM-based VO software contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out, using outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.
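
    The binocular 3D-reconstruction module rests on the classic stereo relation Z = f*B/d: a point imaged d pixels apart in the left and right views of a rig with baseline B and focal length f lies at depth Z. A one-function sketch (the calibration numbers are illustrative, not this vehicle's):

    ```python
    def stereo_depth(focal_px, baseline_m, disparity_px):
        """Depth from binocular disparity via the pinhole relation
        Z = f * B / d (focal length in pixels, baseline in metres)."""
        if disparity_px <= 0:
            raise ValueError("point must have positive disparity")
        return focal_px * baseline_m / disparity_px

    # A rig with a 700 px focal length and 12 cm baseline matches a feature
    # with 20 px disparity between the left and right images.
    print(stereo_depth(700.0, 0.12, 20.0))  # 4.2 (metres)
    ```

    The inverse dependence on disparity is why robust feature matching (the ARFM step) matters: a one-pixel matching error on a distant, low-disparity feature produces a much larger depth error than the same error on a nearby one.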

  15. Policy and Technology Readiness: Engaging the User and Developer Community to Develop a Research Roadmap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, Jarrod; Barr, Jonathan L.; Burtner, Edwin R.

    A key challenge for research roadmapping in the crisis response and management domain is articulating a shared vision of what the future can and should include. Visioning allows for far-reaching stakeholder engagement that can properly align research with stakeholders' needs. Engagement includes feedback from researchers, policy makers, the general public, and end-users on technical and non-technical factors. This work articulates a process and framework for the construction and maintenance of a stakeholder-centric research vision and roadmap in the emergency management domain. This novel roadmapping process integrates three pieces: analysis of the research and technology landscape, visioning, and stakeholder engagement. Our structured engagement process elicits research foci for the roadmap based on relevance to stakeholder missions, identifies collaborators, and builds consensus around the roadmap priorities. We find that the vision process and vision storyboard help SMEs conceptualize and discuss a technology's strengths, weaknesses, and alignment with needs.

  16. Machine-Vision Aids for Improved Flight Operations

    NASA Technical Reports Server (NTRS)

    Menon, P. K.; Chatterji, Gano B.

    1996-01-01

    The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use available information sources for navigation, such as the airport lighting layout, attitude sensors, and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known and that images of the airport lighting can be acquired by the camera has led to the synthesis of machine vision based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family, while Algorithms 5 through 7 belong to the second. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman filter centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms.
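
    The second family of methods hinges on one operation: projecting the known 3-D light positions through a pinhole camera model and scoring the result against the acquired image. A sketch of that projection and residual (function names, intrinsics, and light positions are hypothetical illustrations, not the paper's):

    ```python
    import numpy as np

    def project(points_cam, fx, fy, cx, cy):
        """Pinhole projection of 3-D points (camera frame, Z forward) to
        pixel coordinates: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
        X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
        return np.stack([fx * X / Z + cx, fy * Y / Z + cy], axis=1)

    def reprojection_residual(points_cam, observed_px, fx, fy, cx, cy):
        """Sum of squared pixel errors between the synthesized image of the
        lights and the detected light positions -- the quantity a pose
        estimator (optimizer or Kalman filter) would drive toward zero."""
        diff = project(points_cam, fx, fy, cx, cy) - observed_px
        return float((diff ** 2).sum())

    lights = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]])  # runway lights, m
    obs = np.array([[320.0, 240.0], [370.0, 240.0]])         # detected, px
    print(reprojection_residual(lights, obs, 500.0, 500.0, 320.0, 240.0))  # 0.0
    ```

    Algorithms 1 and 2 in the paper would wrap a residual of this kind in a parameter optimizer over the aircraft pose, while Algorithms 5 through 7 would feed it to a Kalman filter as the measurement model.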

  17. 78 FR 55086 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-09

    ...: Emerging Technologies and Training Neurosciences Integrated Review Group; Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section. Date: October 3-4, 2013. Time: 8:00 a.m. to 11:00 a.m... . Name of Committee: Bioengineering Sciences & Technologies Integrated Review Group; Biomaterials and...

  18. Transforming revenue management.

    PubMed

    Silveria, Richard; Alliegro, Debra; Nudd, Steven

    2008-11-01

    Healthcare organizations that want to undertake a patient administrative/revenue management transformation should: Define the vision with underlying business objectives and key performance measures. Strategically partner with key vendors for business process development and technology design. Create a program organization and governance infrastructure. Develop a corporate design model that defines the standards for operationalizing the vision. Execute the vision through technology deployment and corporate design model implementation.

  19. To See Anew: New Technologies Are Moving Rapidly Toward Restoring or Enabling Vision in the Blind.

    PubMed

    Grifantini, Kristina

    2017-01-01

    Humans have been using technology to improve their vision for many decades. Eyeglasses, contact lenses, and, more recently, laser-based surgeries are commonly employed to remedy vision problems, both minor and major. But options are far fewer for those who have not seen since birth or who have reached stages of blindness in later life.

  20. Another 25 Years of AIED? Challenges and Opportunities for Intelligent Educational Technologies of the Future

    ERIC Educational Resources Information Center

    Pinkwart, Niels

    2016-01-01

    This paper attempts an analysis of some current trends and future developments in computer science, education, and educational technology. Based on these trends, two possible future predictions of AIED are presented in the form of a utopian vision and a dystopian vision. A comparison of these two visions leads to seven challenges that AIED might…

  1. Recognizing sights, smells, and sounds with gnostic fields.

    PubMed

    Kanan, Christopher

    2013-01-01

    Mammals rely on vision, audition, and olfaction to remotely sense stimuli in their environment. Determining how the mammalian brain uses this sensory information to recognize objects has been one of the major goals of psychology and neuroscience. Likewise, researchers in computer vision, machine audition, and machine olfaction have endeavored to discover good algorithms for stimulus classification. Almost 50 years ago, the neuroscientist Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of "gnostic" neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation. Much of what Konorski hypothesized has been remarkably accurate, and neurons with gnostic-like properties have been discovered in visual, aural, and olfactory brain regions. Surprisingly, there have not been any attempts to directly transform his theoretical model into a computational one. Here, I describe the first computational implementation of Konorski's theory. The model is not domain specific, and it surpasses the best machine learning algorithms on challenging image, music, and olfactory classification tasks, while also being simpler. My results suggest that criticisms of exemplar-based models of object recognition as being computationally intractable due to limited neural resources are unfounded.

  2. Recognizing Sights, Smells, and Sounds with Gnostic Fields

    PubMed Central

    Kanan, Christopher

    2013-01-01

    Mammals rely on vision, audition, and olfaction to remotely sense stimuli in their environment. Determining how the mammalian brain uses this sensory information to recognize objects has been one of the major goals of psychology and neuroscience. Likewise, researchers in computer vision, machine audition, and machine olfaction have endeavored to discover good algorithms for stimulus classification. Almost 50 years ago, the neuroscientist Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of “gnostic” neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation. Much of what Konorski hypothesized has been remarkably accurate, and neurons with gnostic-like properties have been discovered in visual, aural, and olfactory brain regions. Surprisingly, there have not been any attempts to directly transform his theoretical model into a computational one. Here, I describe the first computational implementation of Konorski's theory. The model is not domain specific, and it surpasses the best machine learning algorithms on challenging image, music, and olfactory classification tasks, while also being simpler. My results suggest that criticisms of exemplar-based models of object recognition as being computationally intractable due to limited neural resources are unfounded. PMID:23365648

  3. Vision based nutrient deficiency classification in maize plants using multi class support vector machines

    NASA Astrophysics Data System (ADS)

    Leena, N.; Saju, K. K.

    2018-04-01

    Nutritional deficiencies in plants are a major concern for farmers, as they affect productivity and thus profit. This work aims to classify nutritional deficiencies in maize plants in a non-destructive manner using image processing and machine learning techniques. Color images of the leaves are analyzed and classified with a multi-class support vector machine (SVM). Several images of maize leaves with known nitrogen, phosphorus, and potassium (NPK) deficiencies are used to train the SVM classifier prior to the classification of test images. The results show that the method was able to classify and identify the nutritional deficiencies.

  4. Machine tool task force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, G.P.

    1980-10-22

    The Machine Tool Task Force (MTTF) is a multidisciplined team of international experts whose mission was to investigate the state of the art of machine tool technology, to identify promising future directions of that technology for both the US government and private industry, and to disseminate the findings of its research in a conference and through the publication of a final report. MTTF was a two-and-one-half-year effort that involved 122 experts in the specialized technologies of machine tools and in the management of machine tool operations. The scope of the MTTF was limited to cutting-type or material-removal-type machine tools, because they represent the major share and value of all machine tools now installed or being built. The activities of the MTTF and the technical, commercial, and economic significance of recommended activities for improving machine tool technology are discussed. (LCL)

  5. Resources and Information for Parents about Braille

    MedlinePlus


  6. Job Prospects for Manufacturing Engineers.

    ERIC Educational Resources Information Center

    Basta, Nicholas

    1985-01-01

    Coming from a variety of disciplines, manufacturing engineers are keys to industry's efforts to modernize, with demand exceeding supply. The newest and fastest-growing areas include machine vision, composite materials, and manufacturing automation protocols, each of which is briefly discussed. (JN)

  7. Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.

    PubMed

    van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline

    2010-11-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims at more distinguished and better informed visions of future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we believe we have designed an interesting response to a recent call for a less speculative ethics of technology, by encouraging discussions about the quality of positive and negative visions of the future of robotics.

  8. Danville Community College Information Technology General Plan, 1998-99.

    ERIC Educational Resources Information Center

    Danville Community Coll., VA.

    This document describes technology usage, infrastructure and planning for Danville Community College. The Plan is divided into four sections: Introduction, Vision and Mission, Applications, and Infrastructure. The four major goals identified in Vision and Mission are: (1) to ensure the successful use of all technologies through continued training…

  9. Defect detection and classification of machined surfaces under multiple illuminant directions

    NASA Astrophysics Data System (ADS)

    Liao, Yi; Weng, Xin; Swonger, C. W.; Ni, Jun

    2010-08-01

    Continuous improvement of product quality is crucial to a successful and competitive automotive manufacturing industry in the 21st century. Surface porosity on flat machined surfaces such as cylinder heads/blocks and transmission cases may allow leaks of coolant, oil, or combustion gas between critical mating surfaces, causing damage to the engine or transmission. Therefore, 100% inline inspection plays an important role in improving product quality. Although techniques of image processing and machine vision have been applied to machined surface inspection, and much improved, over the past 20 years, in today's automotive industry surface porosity inspection is still done by skilled humans, which is costly, tedious, time-consuming, and not capable of reliably detecting small defects. In our study, an automated defect detection and classification system for flat machined surfaces was designed and constructed. This paper first emphasizes the importance of illuminant direction in a machine vision system, then describes the design and construction of a surface defect inspection system under multiple directional illuminations. Image processing algorithms were developed to detect and classify five types of 2D or 3D surface defects (pore, 2D blemish, residue dirt, scratch, and gouge). The steps of image processing are: (1) image acquisition and contrast enhancement; (2) defect segmentation and feature extraction; (3) defect classification. An artificial machined surface and an actual automotive part, a cylinder head surface, were tested; microscopic surface defects could be accurately detected and assigned to a surface defect class. The cycle time of the system is fast enough that 100% inline inspection is feasible. The field of view is 150 mm × 225 mm, and surfaces larger than the field of view can be stitched together in software.
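
    The segmentation, feature extraction, and classification steps can be illustrated with a toy rule: after thresholding, a region's bounding-box aspect ratio already separates compact pores from elongated scratches. The real system uses richer features and more classes; the sketch below is only the skeleton of the idea, with made-up masks and threshold:

    ```python
    import numpy as np

    def region_features(mask):
        """Area and bounding-box elongation of a binary defect region."""
        ys, xs = np.nonzero(mask)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        return len(ys), max(h, w) / min(h, w)

    def classify_defect(mask, elong_thresh=3.0):
        """Toy rule: strongly elongated regions -> 'scratch', else 'pore'."""
        area, elongation = region_features(mask)
        return "scratch" if elongation >= elong_thresh else "pore"

    pore = np.zeros((20, 20), bool)
    pore[8:12, 8:12] = True         # compact 4x4 blob
    scratch = np.zeros((20, 20), bool)
    scratch[10, 2:18] = True        # thin 1x16 line
    print(classify_defect(pore), classify_defect(scratch))  # pore scratch
    ```

    Distinguishing 2D blemishes from true 3D defects is where the paper's multiple illuminant directions come in: a 3D defect casts direction-dependent shadows across the differently lit images, while a flat blemish does not.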

  10. Computer vision cracks the leaf code

    PubMed Central

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas

    2016-01-01

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664

  11. MLBCD: a machine learning tool for big clinical data.

    PubMed

    Luo, Gang

    2015-01-01

    Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.

  12. Find Services for People Who Are Blind or Visually Impaired

    MedlinePlus

    ... Vision Loss Eye Conditions Losing Your Sight? Using Technology For Parents of Blind Children For Job Seekers For Seniors Braille Video Description Programs & Services Technology Evaluation Center on Vision Loss Learning Center Professional ...

  13. Overview of the Machine-Tool Task Force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, G.P.

    1981-06-08

    The Machine Tool Task Force, (MTTF) surveyed the state of the art of machine tool technology for material removal for two and one-half years. This overview gives a brief summary of the approach, specific subjects covered, principal conclusions and some of the key recommendations aimed at improving the technology and advancing the productivity of machine tools. The Task Force consisted of 123 experts from the US and other countries. Their findings are documented in a five-volume report, Technology of Machine Tools.

  14. Research on the tool holder mode in high speed machining

    NASA Astrophysics Data System (ADS)

    Zhenyu, Zhao; Yongquan, Zhou; Houming, Zhou; Xiaomei, Xu; Haibin, Xiao

    2018-03-01

    High speed machining technology can improve processing efficiency and precision while reducing processing cost, and is therefore highly regarded in industry. With the extensive application of high-speed machining technology, high-speed tool systems place increasingly demanding requirements on the tool chuck. At present, several new kinds of clamping heads are used in high-speed precision machining, including the heat-shrinkage tool-holder, the high-precision spring chuck, the hydraulic tool-holder, and the three-rib deformation chuck. Among them, the heat-shrinkage tool-holder is widely used for its high precision, high clamping force, high bending rigidity, good dynamic balance, etc. It is therefore of great significance to research the new requirements on the machining tool system. To meet the needs of high-speed precision machining technology, this paper expounds the common tool holder technologies of high-precision machining, proposes how to correctly select a tool clamping system in practice, and analyzes the characteristics and existing problems of the tool clamping systems.

  15. The twelfth annual Intelligent Ground Vehicle Competition: team approaches to intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Theisen, Bernard L.; Maslach, Daniel

    2004-10-01

    The Intelligent Ground Vehicle Competition (IGVC) is one of three unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI) in the 1990s. The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Both U.S. and international teams focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 12 years, the competition has challenged undergraduate, graduate and Ph.D. students with real world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 43 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the three-day competition are highlighted. Finally, an assessment of the competition based on participant feedback is presented.

  16. The 21st annual intelligent ground vehicle competition: robotists for the future

    NASA Astrophysics Data System (ADS)

    Theisen, Bernard L.

    2013-12-01

    The Intelligent Ground Vehicle Competition (IGVC) is one of four unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI). The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 21 years, the competition has challenged undergraduate, graduate and Ph.D. students with real world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 80 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the four-day competition are highlighted. Finally, an assessment of the competition based on participation is presented.

  17. The 20th annual intelligent ground vehicle competition: building a generation of robotists

    NASA Astrophysics Data System (ADS)

    Theisen, Bernard L.; Kosinski, Andrew

    2013-01-01

    The Intelligent Ground Vehicle Competition (IGVC) is one of four unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI). The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 20 years, the competition has challenged undergraduate, graduate and Ph.D. students with real world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 80 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the four-day competition are highlighted. Finally, an assessment of the competition based on participation is presented.

  18. Image understanding and the man-machine interface II; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989

    NASA Technical Reports Server (NTRS)

    Barrett, Eamon B. (Editor); Pearson, James J. (Editor)

    1989-01-01

    Image understanding concepts and models, image understanding systems and applications, advanced digital processors and software tools, and advanced man-machine interfaces are among the topics discussed. Particular papers are presented on such topics as neural networks for computer vision, object-based segmentation and color recognition in multispectral images, the application of image algebra to image measurement and feature extraction, and the integration of modeling and graphics to create an infrared signal processing test bed.

  19. Right Sizing the Force: Restructuring the Marine Light Attack Helicopter (HML/A) Squadron to Better Meet the Emerging Threat

    DTIC Science & Technology

    2009-04-29

    visual coverage armed with responsive, high volume offensive or defensive door mounted machine guns and nearly 360° weapons coverage. In addition, a fixed...forward gun option and forward firing rocket capacity substantially expand firepower options. Team this crew and machine with the world’s premiere...escort. 1 Headquarters, U. S. Marine Corps, Vision & Strategy 2025, (Washington, DC: Headquarters, U.S. Marine Corps, June 18, 2008), 9. 2 Scott Atwood

  20. Magic and artifice in the collection of Athanasius Kircher.

    PubMed

    Waddell, Mark A

    2010-03-01

    Situated at the center of intellectual life in baroque Rome, the museum administered by the Jesuit naturalist Athanasius Kircher (1602-1680) simultaneously instructed and bemused its audiences with an exuberant mix of exotic animals, classical art and technological marvels. Kircher's playful use of spectacle and his irrepressible fondness for "magic" were derided by contemporaries as frivolous wonder-mongering, but the lavish machines at the heart of his museum were more than mere showpieces. Instead, they presented audiences with a compelling vision of the natural world in which the hidden foundations of the universe could be captured and displayed by artifice. Kircher's collection was in itself a vast instrument of revelation, conceived on a grander scale than the telescope of Galileo but rooted all the same in contemporary scientific culture. 2009 Elsevier Ltd. All rights reserved.

  1. Automated Monitoring and Analysis of Social Behavior in Drosophila

    PubMed Central

    Dankert, Heiko; Wang, Liming; Hoopfer, Eric D.; Anderson, David J.; Perona, Pietro

    2009-01-01

    We introduce a method based on machine vision for automatically measuring aggression and courtship in Drosophila melanogaster. The genetic and neural circuit bases of these innate social behaviors are poorly understood. High-throughput behavioral screening in this genetically tractable model organism is a potentially powerful approach, but it is currently very laborious. Our system monitors interacting pairs of flies, and computes their location, orientation and wing posture. These features are used for detecting behaviors exhibited during aggression and courtship. Among these, wing threat, lunging and tussling are specific to aggression; circling, wing extension (courtship “song”) and copulation are specific to courtship; locomotion and chasing are common to both. Ethograms may be constructed automatically from these measurements, saving considerable time and effort. This technology should enable large-scale screens for genes and neural circuits controlling courtship and aggression. PMID:19270697

  2. 3D geometric phase analysis and its application in 3D microscopic morphology measurement

    NASA Astrophysics Data System (ADS)

    Zhu, Ronghua; Shi, Wenxiong; Cao, Quankun; Liu, Zhanwei; Guo, Baoqiao; Xie, Huimin

    2018-04-01

    Although three-dimensional (3D) morphology measurement has been widely applied on the macro-scale, there is still a lack of 3D measurement technology on the microscopic scale. In this paper, a microscopic 3D measurement technique based on the 3D-geometric phase analysis (GPA) method is proposed. In this method, with machine vision and phase matching, the traditional GPA method is extended to three dimensions. Using this method, 3D deformation measurement on the micro-scale can be realized using a light microscope. Simulation experiments were conducted in this study, and the results demonstrate that the proposed method has a good anti-noise ability. In addition, the 3D morphology of the necking zone in a tensile specimen was measured, and the results demonstrate that this method is feasible.

  3. Symposium on Aviation Psychology, 1st, Ohio State University, Columbus, OH, April 21, 22, 1981, Proceedings

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The impact of modern technology on the role, responsibility, authority, and performance of human operators in modern aircraft and ATC systems was examined in terms of principles defined by Paul Fitts. Research into human factors in aircraft operations and the use of human factors engineering for aircraft safety improvements were discussed, and features of the man-machine interface in computerized cockpit warning systems are examined. The design and operational features of computerized avionics displays and HUDs are described, along with results of investigations into pilot decision-making behavior, aircrew procedural compliance, and aircrew judgment training programs. Experiments in vision and visual perception are detailed, as are behavioral studies of crew workload, coordination, and complement. The effectiveness of pilot selection, screening, and training techniques are assessed, as are methods for evaluating pilot performance.

  4. Camera-based micro interferometer for distance sensing

    NASA Astrophysics Data System (ADS)

    Will, Matthias; Schädel, Martin; Ortlepp, Thomas

    2017-12-01

    Interference of light provides a high-precision, non-contact and fast method for measuring distances, and this technology therefore dominates in high-precision systems. In the field of compact sensors, however, capacitive, resistive or inductive methods dominate: an interferometric system has to be precisely adjusted and needs high mechanical stability, which usually makes interferometers high-priced, complex systems unsuitable as compact sensors. To overcome this, we developed a new concept for a very small interferometric sensing setup. We combine a miniaturized laser unit, a low-cost pixel detector and machine vision routines to realize a demonstrator for a Michelson-type micro interferometer. The demonstrator, a low-cost sensor smaller than 1 cm³ including all electronics, achieves distance sensing up to 30 cm with resolution in the nm range.
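
    The nanometre-scale resolution of a Michelson-type interferometer follows from fringe counting: each fringe corresponds to half a wavelength of mirror travel. A minimal sketch of the arithmetic, with an illustrative laser wavelength:

```python
WAVELENGTH_NM = 650.0  # illustrative red laser-diode wavelength, in nm

def displacement_from_fringes(fringe_count):
    """Mirror displacement in nm: each fringe is half a wavelength of travel."""
    return fringe_count * WAVELENGTH_NM / 2.0

def fringes_from_displacement(d_nm):
    """Fringe count produced by a mirror displacement of d_nm nanometres."""
    return 2.0 * d_nm / WAVELENGTH_NM

print(displacement_from_fringes(100))  # -> 32500.0 nm, i.e. 32.5 micrometres
```

    In the sensor described, the machine vision routines would count (and interpolate between) fringes on the pixel detector rather than take an integer count as input.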

  5. A comparative study of machine learning models for ethnicity classification

    NASA Astrophysics Data System (ADS)

    Trivedi, Advait; Bessie Amali, D. Geraldine

    2017-11-01

    This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta-information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression is documented. Experimental results indicate that the logistic regression classifier provides more accurate classification than the support vector machine.
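
    The logistic regression side of such a comparison can be sketched from scratch. The block below fits a 1-D logistic regression by batch gradient descent on toy data; it is a generic illustration of the technique, not the paper's experimental setup:

```python
import math
import random

def train_logistic(data, lr=0.5, epochs=200):
    """Logistic regression on 1-D inputs, fitted by batch gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

def predict(w, b, x):
    return 1 if w * x + b >= 0.0 else 0  # sigmoid >= 0.5 iff logit >= 0

# Toy 1-D data: class 0 around -1, class 1 around +1.
rng = random.Random(0)
data = [(rng.gauss(-1.0, 0.5), 0) for _ in range(50)] + \
       [(rng.gauss(1.0, 0.5), 1) for _ in range(50)]
w, b = train_logistic(data)
acc = sum(predict(w, b, x) == y for x, y in data) / len(data)
print(round(acc, 2))
```

    A real comparison would use high-dimensional face features and held-out test data rather than training accuracy on a toy set.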

  6. Impact resistance of materials for guards on cutting machine tools--requirements in future European safety standards.

    PubMed

    Mewes, D; Trapp, R P

    2000-01-01

    Guards on machine tools are meant to protect operators from injuries caused by tools, workpieces, and fragments hurled out of the machine's working zone. This article presents the impact resistance requirements that guards must satisfy according to European safety standards for machine tools. Based upon these standards, the impact resistance of different guard materials was determined using cylindrical steel projectiles. Polycarbonate proves to be a suitable material for vision panels because of its high energy absorption capacity; the impact resistance of 8-mm thick polycarbonate is roughly equal to that of a 3-mm thick steel sheet Fe P01. Its limited ageing stability, however, makes it necessary to protect polycarbonate against cooling lubricants by means of additional panes on both sides.

  7. Implementation of a robotic flexible assembly system

    NASA Technical Reports Server (NTRS)

    Benton, Ronald C.

    1987-01-01

    As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies and ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.

  8. AsTeRICS.

    PubMed

    Drajsajtl, Tomáš; Struk, Petr; Bednárová, Alice

    2013-01-01

    AsTeRICS - "The Assistive Technology Rapid Integration & Construction Set" is a construction set for assistive technologies which can be adapted to the motor abilities of end-users. AsTeRICS allows access to different devices such as PCs, cell phones and smart home devices, with all of them integrated in a platform adapted as much as possible to each user. People with motor disabilities in the upper limbs, with no cognitive impairment, no perceptual limitations (neither visual nor auditory) and with basic skills in using technologies such as PCs, cell phones, electronic agendas, etc. have available a flexible and adaptable technology which enables them to access the Human-Machine-Interfaces (HMI) on the standard desktop and beyond. AsTeRICS provides graphical model design tools, a middleware and hardware support for the creation of tailored AT-solutions involving bioelectric signal acquisition, Brain-/Neural Computer Interfaces, Computer-Vision techniques and standardized actuator and device controls and allows combining several off-the-shelf AT-devices in every desired combination. Novel, end-user ready solutions can be created and adapted via a graphical editor without additional programming efforts. The AsTeRICS open-source framework provides resources for utilization and extension of the system to developers and researches. AsTeRICS was developed by the AsTeRICS project and was partially funded by EC.

  9. Social energy: mining energy from the society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jun Jason; Gao, David Wenzhong; Zhang, Yingchen

    The inherent nature of energy, i.e., physicality, sociality and informatization, implies the inevitable and intensive interaction between energy systems and social systems. From this perspective, we define 'social energy' as a complex sociotechnical system of energy systems, social systems and the derived artificial virtual systems which characterize the intense intersystem and intra-system interactions. The recent advancement in intelligent technology, including artificial intelligence and machine learning technologies, sensing and communication in Internet of Things technologies, and massive high performance computing and extreme-scale data analytics technologies, enables the possibility of substantial advancement in socio-technical system optimization, scheduling, control and management. In this paper, we provide a discussion on the nature of energy, and then propose the concept and intention of social energy systems for electrical power. A general methodology of establishing and investigating social energy is proposed, which is based on the ACP approach, i.e., 'artificial systems' (A), 'computational experiments' (C) and 'parallel execution' (P), and parallel system methodology. A case study on the University of Denver (DU) campus grid is provided and studied to demonstrate the social energy concept. In the concluding remarks, we discuss the technical pathway, in both social and nature sciences, to social energy, and our vision on its future.

  10. Categorization of extraneous matter in cotton using machine vision systems

    USDA-ARS?s Scientific Manuscript database

    The Cotton Trash Identification System (CTIS) developed at the Southwestern Cotton Ginning Research Laboratory was evaluated for identification and categorization of extraneous matter in cotton. The CTIS bark/grass categorization was evaluated with USDA-Agricultural Marketing Service (AMS) extraneou...

  11. Hyperspectral imaging for nondestructive evaluation of tomatoes

    USDA-ARS?s Scientific Manuscript database

    Machine vision methods for quality and defect evaluation of tomatoes have been studied for online sorting and robotic harvesting applications. We investigated the use of a hyperspectral imaging system for quality evaluation and defect detection for tomatoes. Hyperspectral reflectance images were a...

  12. Technology of machine tools. Volume 1. Executive summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, G.P.

    1980-10-01

    The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.

  13. 75 FR 60141 - International Business Machines (IBM), Global Technology Services Delivery Division, Including On...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-29

    ... 25, 2010, applicable to workers of International Business Machines (IBM), Global Technology Services... hereby issued as follows: All workers of International Business Machines (IBM), Global Technology... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,164] International Business...

  14. Going Below Minimums: The Efficacy of Display Enhanced/Synthetic Vision Fusion for Go-Around Decisions during Non-Normal Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.

    2007-01-01

    The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.

  15. Intelligent Vision On The SM9O Mini-Computer Basis And Applications

    NASA Astrophysics Data System (ADS)

    Hawryszkiw, J.

    1985-02-01

    A distinction has to be made between image processing and vision. Image processing finds its roots in the strong tradition of linear signal processing and promotes geometrical transform techniques such as filtering, compression and restoration. Its purpose is to transform an image so that a human observer can easily extract the information significant to him, for example edges after a gradient operator, or a specific direction after a directional filtering operation. Image processing thus consists of a set of local or global space-time transforms, and the interpretation of the final image is done by the human observer. The purpose of vision, by contrast, is to extract the semantic content of the image. The machine can then understand that content and run a process of decision, which turns into an action. Intelligent vision thus depends on image processing, pattern recognition and artificial intelligence.

  16. Images Revealing More Than a Thousand Words

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A unique sensor developed by ProVision Technologies, a NASA Commercial Space Center housed by the Institute for Technology Development, produces hyperspectral images with cutting-edge applications in food safety, skin health, forensics, and anti-terrorism activities. While hyperspectral imaging technology continues to make advances with ProVision Technologies, it has also been transferred to the commercial sector through a spinoff company, Photon Industries, Inc.

  17. Multiple Optical Filter Design Simulation Results

    NASA Astrophysics Data System (ADS)

    Mendelsohn, J.; Englund, D. C.

    1986-10-01

    In this paper we continue our investigation of the application of matched filters to robotic vision problems, specifically the tray-picking problem. Our principal interest is the examination of summation effects which arise from attempting to reduce the matched-filter memory size by averaging matched filters. While matched filtering for pattern recognition or machine vision is ideally implemented with optics and optical correlators, the results in this paper were obtained through a digital simulation of the optical process.
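
    The matched-filter principle that the paper simulates digitally can be sketched in one dimension: correlate the input with the template and take the correlation peak as the detected location. (The paper's setting is 2-D and optical; this is only the underlying idea.)

```python
def matched_filter(signal, template):
    """Slide the template over the signal; return the index of the
    cross-correlation peak, i.e. the detected template position."""
    n, m = len(signal), len(template)
    best_idx, best_score = 0, float("-inf")
    for i in range(n - m + 1):
        score = sum(signal[i + j] * template[j] for j in range(m))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# Template embedded in an otherwise zero signal at offset 7.
template = [1.0, 2.0, 3.0, 2.0, 1.0]
signal = [0.0] * 7 + template + [0.0] * 8
print(matched_filter(signal, template))  # -> 7
```

    Averaging several templates into one filter, as the paper studies, trades memory for a broader, less selective correlation peak.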

  18. Research on axisymmetric aspheric surface numerical design and manufacturing technology

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-zhong; Guo, Yin-biao; Lin, Zheng

    2006-02-01

    The key to aspheric machining is generating an exact machining path so that aspheric lenses can be machined with high accuracy and efficiency, even as traditional manual manufacturing has developed into today's numerical control (NC) machining. This paper presents a mathematical model relating a virtual cone to the aspheric surface equation, and discusses techniques for uniform grinding-wheel wear and error compensation in aspheric machining. Finally, a software system for high-precision aspheric surface manufacturing, based on the above, is designed and realized. This software system works out the grinding wheel path from input parameters and generates NC machining programs for aspheric surfaces.

  19. Before They Can Speak, They Must Know.

    ERIC Educational Resources Information Center

    Cromie, William J.; Edson, Lee

    1984-01-01

    Intelligent relationships with people are among the goals for tomorrow's computers. Knowledge-based systems used and being developed to achieve these goals are discussed. Automatic learning, producing inferences, parallelism, program languages, friendly machines, computer vision, and biomodels are among the topics considered. (JN)

  20. Robust crop and weed segmentation under uncontrolled outdoor illumination

    USDA-ARS?s Scientific Manuscript database

    A new machine vision for weed detection was developed from RGB color model images. Processes included in the algorithm for the detection were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
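
    The excessive-green conversion and thresholding steps mentioned above can be sketched per pixel as follows; the threshold and pixel values here are illustrative, whereas the actual algorithm computes the threshold statistically and adapts it:

```python
def excess_green(rgb):
    """Excess-green index ExG = 2G - R - B, a common vegetation indicator."""
    r, g, b = rgb
    return 2 * g - r - b

def segment(pixels, threshold):
    """Binary plant/background mask by thresholding ExG per pixel."""
    return [1 if excess_green(p) > threshold else 0 for p in pixels]

# Toy pixels: two green-ish (plant) and two brown/gray-ish (soil).
pixels = [(40, 180, 50), (60, 200, 70), (120, 100, 80), (90, 90, 90)]
print(segment(pixels, threshold=20))  # -> [1, 1, 0, 0]
```

    The median filtering mentioned in the abstract would then clean isolated misclassified pixels from the resulting mask.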

  1. Teaching Camera Calibration by a Constructivist Methodology

    ERIC Educational Resources Information Center

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  2. CATEGORIZATION OF EXTRANEOUS MATTER IN COTTON USING MACHINE VISION SYSTEMS

    USDA-ARS?s Scientific Manuscript database

    The Cotton Trash Identification System (CTIS) was developed at the Southwestern Cotton Ginning Research Laboratory to identify and categorize extraneous matter in cotton. The CTIS bark/grass categorization was evaluated with USDA-Agricultural Marketing Service (AMS) extraneous matter calls assigned ...

  3. Camera positioning and calibration techniques for integrating traffic surveillance video systems with machine-vision vehicle detection devices.

    DOT National Transportation Integrated Search

    2002-12-01

    The Virginia Department of Transportation, like many other transportation agencies, has invested significantly in extensive closed circuit television (CCTV) systems to monitor freeways in urban areas. Although these systems have proven very effective...

  4. 75 FR 70691 - International Game Technology (IGT), Machine Accounting and ABS (Bonusing and BEII), Engineering...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

    ... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-73,477] International Game..., applicable to workers of International Game Technology (IGT), Machine Accounting and ABS (Bonusing and BEII... International Game Technology (IGT), Machine Accounting and ABS (Bonusing and BEII), and Engineering. The...

  5. Iris features-based heart disease diagnosis by computer vision

    NASA Astrophysics Data System (ADS)

    Nguchu, Benedictor A.; Li, Li

    2017-07-01

    The study takes advantage of several recent breakthroughs in computer vision technology to develop a new iris-based biomedical platform that processes iris images for early detection of heart disease. Early detection of heart disease offers the possibility of non-surgical treatment, as suggested by biomedical researchers and associated institutions. However, a clinically practicable solution that is both sensitive and specific enough for early detection is still lacking; as a result, mortality remains high, compounded by delayed diagnostic procedures and the inefficiency and complications of available methods. This research therefore proposes a novel IFB (Iris Features Based) method for diagnosing premature and early-stage heart disease. The method combines computer vision and iridology to obtain a robust, non-contact, non-radioactive, and cost-effective diagnostic tool. It analyzes inherent tissue weakness and changes in color and pattern in the specific region of the iris that responds to impulses of the heart organ, per the Bernard Jensen iris chart; such iris changes suggest the presence of degenerative abnormalities in the heart. These changes are detected and analyzed by the IFB method, which includes tensor-based gradients (TBG), multi-orientation Gabor filters (GF), textural oriented features (TOF), and speeded-up robust features (SURF). Kernel and multi-class support vector machine classifiers are used to classify normal and pathological iris features. Experimental results demonstrated that the proposed method not only has better diagnostic performance but also provides an insight for early detection of other diseases.
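
    Of the feature extractors listed, the Gabor filter bank is the most straightforward to sketch: each kernel is a Gaussian envelope modulated by an oriented cosine wave, and sweeping the orientation yields the multi-orientation bank. Parameters below are illustrative, not the paper's:

```python
import math

def gabor_kernel(size, sigma, theta, lam):
    """Real 2-D Gabor kernel: Gaussian envelope times an oriented cosine.
    theta sets the orientation; lam is the wavelength of the cosine."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotated coords
            yr = -x * math.sin(theta) + y * math.cos(theta)
            row.append(math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
                       * math.cos(2 * math.pi * xr / lam))
        kernel.append(row)
    return kernel

# One filter of a bank; sweeping theta over e.g. 0, 45, 90, 135 degrees
# yields the multi-orientation bank.
kern = gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0)
print(len(kern), len(kern[0]), kern[4][4])  # 9 x 9 kernel, center value 1.0
```

    Convolving the iris region with each oriented kernel produces the texture responses that, together with the other features, feed the SVM classifiers.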

  6. Reservoir Maintenance and Development Task Report for the DOE Geothermal Technologies Office GeoVision Study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowry, Thomas Stephen; Finger, John T.; Carrigan, Charles R.

    This report documents the key findings from the Reservoir Maintenance and Development (RM&D) Task of the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) Geothermal Vision Study (GeoVision Study). The GeoVision Study had the objective of conducting analyses of future geothermal growth based on sets of current and future geothermal technology developments. The RM&D Task is one of seven tasks within the GeoVision Study, the others being Exploration and Confirmation, Potential to Penetration, Institutional Market Barriers, Environmental and Social Impacts, Thermal Applications, and Hybrid Systems. The full set of findings and the details of the GeoVision Study can be found in the final GeoVision Study report on the DOE-GTO website. As applied here, RM&D refers to the activities associated with developing, exploiting, and maintaining a known geothermal resource. It assumes that the site has already been vetted and that the resource has been evaluated to be of sufficient quality to move towards full-scale development. It also assumes that the resource is to be developed for power generation, as opposed to low-temperature or direct use applications. This document presents the key factors influencing RM&D from both a technological and operational standpoint and provides a baseline of its current state. It also looks forward to describe areas of research and development that must be pursued if the development of geothermal energy is to reach its full potential.

  7. Visions 2020.2: Student Views on Transforming Education and Training through Advanced Technologies

    ERIC Educational Resources Information Center

    US Department of Education, 2004

    2004-01-01

    The U.S. Departments of Commerce and Education (who co-chair the NSTC Working Group) and NetDay formed a partnership aimed at analyzing K-12 student views about technology for learning. These views are analyzed in this second report, "Visions 2020.2: Student Views on Transforming Education and Training Through Advanced Technologies." In…

  8. Framing Innovation: Does an Instructional Vision Help Superintendents Gain Acceptance for a Large-Scale Technology Initiative?

    ERIC Educational Resources Information Center

    Flanagan, Gina E.

    2014-01-01

    There is limited research that outlines how a superintendent's instructional vision can help to gain acceptance of a large-scale technology initiative. This study explored how superintendents gain acceptance for a large-scale technology initiative (specifically a 1:1 device program) through various leadership actions. The role of the instructional…

  9. Compact Microscope Imaging System with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer-controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability: tasks that, heretofore, have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: the translation stage moves the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that yields the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
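    The autofocus loop described above can be sketched as follows. This is a simplified reconstruction, not NASA's code; the radial cutoff frequency and the single-pass search over candidate positions are assumptions standing in for the coarse/medium/fine procedure.

```python
import numpy as np

def focus_score(image, cutoff=0.15):
    """Fraction of FFT power above a radial spatial-frequency cutoff
    (in cycles/pixel); a sharper image scores higher."""
    power = np.abs(np.fft.fft2(image)) ** 2
    fy = np.fft.fftfreq(image.shape[0])
    fx = np.fft.fftfreq(image.shape[1])
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return float(power[radius > cutoff].sum() / power.sum())

def autofocus(capture, positions):
    """Return the stage position whose captured image maximizes the score;
    in practice this would be repeated over coarse, medium, and fine steps."""
    return max(positions, key=lambda z: focus_score(capture(z)))
```

    Here `capture` is any callable that moves the (hypothetical) stage to a position and returns an image array.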

  10. Sensory and Quality Evaluation of Traditional Compared with Power Ultrasound Processed Corn (Zea Mays) Tortilla Chips.

    PubMed

    Janve, Bhaskar; Yang, Wade; Sims, Charles

    2015-06-01

    Power ultrasound reduces the traditional corn steeping time from 18 to 1.5 h during tortilla chip dough (masa) processing. This study sought to examine consumer (n = 99) acceptability and quality of tortilla chips made from masa by traditional compared with ultrasonic methods. Overall appearance, flavor, and texture acceptability scores were evaluated using a 9-point hedonic scale. The baked chips (process intermediate) before frying and the fried chips (finished product) were analyzed using a texture analyzer and machine vision. Texture values were determined using a 3-point bend test in terms of breaking force gradient (BFG), peak breaking force (PBF), and breaking distance (BD). Fracturing properties were determined with a crisp fracture support rig in terms of fracture force gradient (FFG), peak fracture force (PFF), and fracture distance (FD). Machine vision evaluated total surface area, lightness (L), color difference (ΔE), hue (°h), and chroma (C*). The results were evaluated by analysis of variance, and means were separated using Tukey's test. Machine vision values of L and °h were higher (P < 0.05) and ΔE was lower (P < 0.05) for fried chips, and L and °h were significantly (P < 0.05) higher for baked chips, produced by ultrasonication as compared with the traditional method. Texture of baked chips from ultrasonication was significantly higher (P < 0.05) in BFG, BPD, PFF, and FD. Texture of fried tortilla chips was significantly higher (P < 0.05) in BFG and PFF for ultrasonication than for traditional processing. However, these instrumental differences were not detected in sensory analysis, suggesting that power ultrasound could serve as a tortilla chip processing aid. © 2015 Institute of Food Technologists®

  11. Cyborg identities and contemporary techno-utopias: adaptations and transformations of the body in the age of nanotechnology.

    PubMed

    Maestrutti, Marina

    2011-01-01

    The possibility of improving the human body through a closer relationship with technology, in order to move the human species toward new stages of evolution, is a constant element of techno-utopian visions, among them transhumanism. This projection of a radical transformation of the body, and mind, as a result of technological action is based on the concepts of adaptation, or non-adaptation, of a human being to a world constantly changed by technoscience. The belief is not only that the body has to change, but that identity is not a stable concept. This mobility in the relationship between body and identity is typical of posthuman thought, which inherits from the informational model the conviction that the biological embodiment of the human is to be regarded as an accident of history rather than as an essential condition of life. Hybridization is therefore valued by posthuman thought as the condition which has "made" the human as he is today, and it appears as a fundamental topic in any discourse on nanotechnology, biotechnology, and the development of human-machine interfaces.

  12. Computer image analysis in caryopses quality evaluation as exemplified by malting barley

    NASA Astrophysics Data System (ADS)

    Koszela, K.; Raba, B.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Przybylak, A.; Boniecki, P.; Przybył, J.

    2015-07-01

    One of the purposes of employing modern technologies in the agricultural and food industry is to increase the efficiency and automation of production processes, which helps improve the productive effectiveness of business enterprises, thus making them more competitive. Nowadays, a challenge presents itself for this branch of the economy: to produce agricultural and food products characterized by the best quality parameters while maintaining optimum production and distribution costs of the processed biological material. Thus, several scientific centers seek to devise new and improved methods and technologies in this field that will meet these expectations. A new solution, under constant development, is to employ so-called machine vision to replace human work in both quality and quantity evaluation processes. An indisputable advantage of this method is that the evaluation remains unbiased and faster and, importantly, is free of expert fatigue. This paper elaborates on quality evaluation by marking the contamination in malting barley grains using computer image analysis and selected methods of artificial intelligence [4-5].

  13. 6 CFR 37.19 - Machine readable technology on the driver's license or identification card.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., States must use the ISO/IEC 15438:2006(E) Information Technology—Automatic identification and data... 6 Domestic Security 1 2011-01-01 2011-01-01 false Machine readable technology on the driver's..., Verification, and Card Issuance Requirements § 37.19 Machine readable technology on the driver's license or...

  14. 6 CFR 37.19 - Machine readable technology on the driver's license or identification card.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., States must use the ISO/IEC 15438:2006(E) Information Technology—Automatic identification and data... 6 Domestic Security 1 2010-01-01 2010-01-01 false Machine readable technology on the driver's..., Verification, and Card Issuance Requirements § 37.19 Machine readable technology on the driver's license or...

  15. Fusion of Synthetic and Enhanced Vision for All-Weather Commercial Aviation Operations

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence, III

    2007-01-01

    NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was not adversely impacted by the display concepts, although the addition of Enhanced Vision did not, of itself, provide an improvement in runway incursion detection.

  16. Therapist-Assisted Rehabilitation of Visual Function and Hemianopia after Brain Injury: Intervention Study on the Effect of the Neuro Vision Technology Rehabilitation Program.

    PubMed

    Rasmussen, Rune Skovgaard; Schaarup, Anne Marie Heltoft; Overgaard, Karsten

    2018-02-27

    Serious and often lasting vision impairments affect 30% to 35% of people following stroke. Vision may be considered the most important sense in humans, and even smaller permanent injuries can drastically reduce quality of life. Restoration of visual field impairments occurs only to a small extent during the first month after brain damage, and therefore the time window for spontaneous improvements is limited. One month after brain injury causing visual impairment, patients usually will experience chronically impaired vision, and the need for compensatory vision rehabilitation is substantial. The purpose of this study is to investigate whether rehabilitation with Neuro Vision Technology will result in a significant and lasting improvement in functional capacity in persons with chronic visual impairments after brain injury. Improving eyesight is expected to increase both physical and mental functioning, thus improving quality of life. This is a prospective open-label trial in which participants with chronic visual field impairments are examined before and after the intervention. Participants typically suffer from stroke or traumatic brain injury and will be recruited from hospitals and The Institute for the Blind and Partially Sighted. Treatment is based on Neuro Vision Technology, a supervised training course in which participants are trained in compensatory techniques using specially designed equipment. Through the Neuro Vision Technology procedure, the vision problems of each individual are carefully investigated, and personal data is used to organize individual training sessions. Cognitive face-to-face assessments and self-assessed questionnaires about both life and vision quality are also applied before and after the training. Funding was provided in June 2017. Results are expected to be available in 2020. The sample size is calculated at 23 participants.
Due to age, difficulty in transport, and the time-consuming intervention, up to 25% dropouts are expected; thus, we aim to include at least 29 participants. This investigation will evaluate the effects of Neuro Vision Technology therapy on compensatory vision rehabilitation. Additionally, quality of life and cognitive improvements associated with increased quality of life will be explored. ClinicalTrials.gov NCT03160131; https://clinicaltrials.gov/ct2/show/NCT03160131 (Archived by WebCite at http://www.webcitation.org/6x3f5HnCv). ©Rune Skovgaard Rasmussen, Anne Marie Heltoft Schaarup, Karsten Overgaard. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 27.02.2018.

  17. Machine-vision-based roadway health monitoring and assessment : development of a shape-based pavement-crack-detection approach.

    DOT National Transportation Integrated Search

    2016-01-01

    State highway agencies (SHAs) routinely employ semi-automated and automated image-based methods for network-level : pavement-cracking data collection, and there are different types of pavement-cracking data collected by SHAs for reporting and : manag...

  18. A Starter's Guide to Artificial Intelligence.

    ERIC Educational Resources Information Center

    McConnell, Barry A.; McConnell, Nancy J.

    1988-01-01

    Discussion of the history and development of artificial intelligence (AI) highlights a bibliography of introductory books on various aspects of AI, including AI programing; problem solving; automated reasoning; game playing; natural language; expert systems; machine learning; robotics and vision; critics of AI; and representative software. (LRW)

  19. Detection of eviscerated poultry spleen enlargement by machine vision

    NASA Astrophysics Data System (ADS)

    Tao, Yang; Shao, June J.; Skeeles, John K.; Chen, Yud-Ren

    1999-01-01

    The size of a poultry spleen is an indication of whether the bird is wholesome or has a virus-related disease. This study explored the possibility of detecting poultry spleen enlargement with a computer imaging system to assist human inspectors in food safety inspections. Images of 45-day-old hybrid turkey internal viscera were taken using fluorescent and UV lighting systems. Image processing algorithms including linear transformation, morphological operations, and statistical analyses were developed to distinguish the spleen from its surroundings and then to detect abnormal spleens. Experimental results demonstrated that the imaging method could effectively distinguish spleens from other organs and intestines. Based on a total sample of 57 birds, the classification rates for the correct detection of normal and abnormal birds were 92% on a self-test set and 95% on an independent test set. The methodology indicated the feasibility of using automated machine vision systems in the future to inspect internal organs and check the wholesomeness of poultry carcasses.
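    The pipeline of thresholding, morphological cleanup, and an area test can be sketched in a few lines. This is a generic reconstruction under assumed thresholds and a 3x3 cross structuring element, not the authors' algorithm.

```python
import numpy as np

def erode(mask, it=1):
    """Binary erosion with a 3x3 cross (4-neighborhood) element."""
    for _ in range(it):
        p = np.pad(mask, 1, constant_values=True)
        mask = (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2]
                & p[1:-1, 2:] & p[1:-1, 1:-1])
    return mask

def dilate(mask, it=1):
    """Binary dilation with the same cross element."""
    for _ in range(it):
        p = np.pad(mask, 1)
        mask = (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2]
                | p[1:-1, 2:] | p[1:-1, 1:-1])
    return mask

def spleen_area_flag(img, thresh, area_limit):
    """Threshold, open (erode then dilate) to remove specks,
    then flag the region as enlarged if its area exceeds a limit."""
    mask = dilate(erode(img > thresh))
    area = int(mask.sum())
    return area, area > area_limit
```

    The opening step removes isolated bright specks while mostly preserving a compact region, so the surviving area tracks organ size.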

  20. Vision Guided Intelligent Robot Design And Experiments

    NASA Astrophysics Data System (ADS)

    Slutzky, G. D.; Hall, E. L.

    1988-02-01

    The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches, and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert-systems approaches to solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box-stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.

  1. FAIR principles and the IEDB: short-term improvements and a long-term vision of OBO-foundry mediated machine-actionable interoperability

    PubMed Central

    Vita, Randi; Overton, James A; Mungall, Christopher J; Sette, Alessandro

    2018-01-01

    The Immune Epitope Database (IEDB), at www.iedb.org, has the mission to make published experimental data relating to the recognition of immune epitopes easily available to the scientific public. By presenting curated data in a searchable database, we have liberated it from the tables and figures of journal articles, making it more accessible and usable by immunologists. Recently, the principles of Findability, Accessibility, Interoperability and Reusability have been formulated as goals that data repositories should meet to enhance the usefulness of their data holdings. We here examine how the IEDB complies with these principles and identify broad areas of success, but also areas for improvement. We describe short-term improvements to the IEDB that are being implemented now, as well as a long-term vision of true ‘machine-actionable interoperability’, which we believe will require community agreement on standardization of knowledge representation that can be built on top of the shared use of ontologies. PMID:29688354

  2. Machine vision based teleoperation aid

    NASA Technical Reports Server (NTRS)

    Hoff, William A.; Gatrell, Lance B.; Spofford, John R.

    1991-01-01

    When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive (requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator). A machine vision based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators was conducted and showed that task accuracies were significantly greater with than without this aid.

  3. Some distinguishing characteristics of contour and texture phenomena in images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1992-01-01

    The development of generalized contour/texture discrimination techniques is a central element necessary for machine vision recognition and interpretation of arbitrary images. Here, the visual perception of texture, selected studies of texture analysis in machine vision, and diverse small samples of contour and texture are all used to provide insights into the fundamental characteristics of contour and texture. From these, an experimental discrimination scheme is developed and tested on a battery of natural images. The visual perception of texture defined fine texture as a subclass which is interpreted as shading and is distinct from coarse figural similarity textures. Also, perception defined the smallest scale for contour/texture discrimination as eight to nine visual acuity units. Three contour/texture discrimination parameters were found to be moderately successful for this scale discrimination: (1) lightness change in a blurred version of the image, (2) change in lightness change in the original image, and (3) percent change in edge counts relative to local maximum.
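    The three discrimination parameters can be given rough numeric analogues. The forms below are assumptions for illustration only, not the paper's definitions: (1) is taken as mean gradient magnitude of a blurred image, (2) as the mean gradient of the gradient magnitude, and (3) as a simple edge-density proxy with an assumed threshold.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def discrimination_params(img):
    """Rough numeric analogues of the three parameters (assumed forms)."""
    p1 = np.hypot(*np.gradient(box_blur(img))).mean()  # (1) blurred lightness change
    g = np.hypot(*np.gradient(img))
    p2 = np.hypot(*np.gradient(g)).mean()              # (2) change in lightness change
    p3 = float((g > 0.1).mean())                       # (3) edge-density proxy
    return p1, p2, p3
```

    On a smooth ramp (contour-like shading) parameters (2) and (3) stay near zero, while on a textured patch they are large, which is the direction of the discrimination the abstract describes.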

  4. KENNEDY SPACE CENTER, FLA. - Center Director Jim Kennedy talks to students in Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla. Kennedy made the trip with NASA astronaut Kay Hire to share the agency’s new vision for space exploration with the next generation of explorers. Kennedy is talking with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.

    NASA Image and Video Library

    2004-02-20

    KENNEDY SPACE CENTER, FLA. - Center Director Jim Kennedy talks to students in Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla. Kennedy made the trip with NASA astronaut Kay Hire to share the agency’s new vision for space exploration with the next generation of explorers. Kennedy is talking with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.

  5. KENNEDY SPACE CENTER, FLA. - Astronaut Kay Hire talks to students in Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla. She joined Center Director Jim Kennedy in sharing the agency’s new vision for space exploration with the next generation of explorers. Kennedy is talking with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.

    NASA Image and Video Library

    2004-02-20

    KENNEDY SPACE CENTER, FLA. - Astronaut Kay Hire talks to students in Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla. She joined Center Director Jim Kennedy in sharing the agency’s new vision for space exploration with the next generation of explorers. Kennedy is talking with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.

  6. KENNEDY SPACE CENTER, FLA. - Students at Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla., listen attentively to astronaut Kay Hire. She and Center Director Jim Kennedy were at the school to share the agency’s new vision for space exploration with the next generation of explorers. Kennedy is talking with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.

    NASA Image and Video Library

    2004-02-20

    KENNEDY SPACE CENTER, FLA. - Students at Garland V. Stewart Magnet Middle School, a NASA Explorer School (NES) in Tampa, Fla., listen attentively to astronaut Kay Hire. She and Center Director Jim Kennedy were at the school to share the agency’s new vision for space exploration with the next generation of explorers. Kennedy is talking with students about our destiny as explorers, NASA’s stepping stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.

  7. Identification by machine vision of the rate of motor activity decline as a lifespan predictor in C. elegans

    PubMed Central

    Hsu, Ao-Lin; Feng, Zhaoyang; Hsieh, Meng-Yin; Xu, X. Z. Shawn

    2009-01-01

    One challenge in aging research concerns identifying physiological parameters or biomarkers that can reflect the physical health of an animal and predict its lifespan. In C. elegans, a model organism widely used in aging research, motor deficits develop in old worms. Here we employed machine vision to quantify worm locomotion behavior throughout lifespan. We confirm that aging worms undergo a progressive decline in motor activity, beginning in early life. Importantly, the rate of motor activity decline rather than the absolute motor activity in the early-to-mid life of individual worms in an isogenic population inversely correlates with their lifespan, and thus may serve as a lifespan predictor. Long-lived mutant strains with deficits in insulin/IGF-1 signaling or food intake display a reduction in the rate of motor activity decline, suggesting that this parameter might also be used for across-strain comparison of healthspan. Our work identifies an endogenous physiological parameter for lifespan prediction and healthspan comparison. PMID:18255194
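    The key quantity here, the rate of motor activity decline, is simply the slope of activity over an early-to-mid-life window. A minimal sketch follows; the data shapes are hypothetical and this is not the authors' tracking code. Note the sign convention: with signed slopes, a positive correlation below corresponds to the paper's inverse relation between decline rate (as a magnitude) and lifespan.

```python
import numpy as np

def decline_rate(activity, days):
    """Slope (activity units per day) of a linear fit over the
    early-to-mid-life window; more negative means faster decline."""
    slope, _intercept = np.polyfit(days, activity, 1)
    return slope

def rate_lifespan_correlation(activities, days, lifespans):
    """Pearson correlation between per-worm decline rate and lifespan."""
    rates = [decline_rate(a, days) for a in activities]
    return float(np.corrcoef(rates, lifespans)[0, 1])
```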

  8. Identification by machine vision of the rate of motor activity decline as a lifespan predictor in C. elegans.

    PubMed

    Hsu, Ao-Lin; Feng, Zhaoyang; Hsieh, Meng-Yin; Xu, X Z Shawn

    2009-09-01

    One challenge in aging research concerns identifying physiological parameters or biomarkers that can reflect the physical health of an animal and predict its lifespan. In C. elegans, a model organism widely used in aging research, motor deficits develop in old worms. Here we employed machine vision to quantify worm locomotion behavior throughout lifespan. We confirm that aging worms undergo a progressive decline in motor activity, beginning in early life. Importantly, the rate of motor activity decline rather than the absolute motor activity in the early-to-mid life of individual worms in an isogenic population inversely correlates with their lifespan, and thus may serve as a lifespan predictor. Long-lived mutant strains with deficits in insulin/IGF-1 signaling or food intake display a reduction in the rate of motor activity decline, suggesting that this parameter might also be used for across-strain comparison of healthspan. Our work identifies an endogenous physiological parameter for lifespan prediction and healthspan comparison.

  9. Activity Recognition in Egocentric video using SVM, kNN and Combined SVMkNN Classifiers

    NASA Astrophysics Data System (ADS)

    Sanal Kumar, K. P.; Bhavani, R., Dr.

    2017-08-01

    Egocentric vision is a unique, human-centric perspective in computer vision. The recognition of egocentric actions is a challenging task that helps in assisting elderly people, disabled patients, and others. In this work, life-logging activity videos are taken as input, with activities organized into two category levels: a top level and a second level. Recognition is performed using features such as Histogram of Oriented Gradients (HOG), Motion Boundary Histogram (MBH), and Trajectory. The features are fused together to act as a single feature vector and then reduced using Principal Component Analysis (PCA). The reduced features are provided as input to classifiers: Support Vector Machine (SVM), k-nearest neighbor (kNN), and a combined SVM and kNN (combined SVMkNN). These classifiers are evaluated, and the combined SVMkNN provided better results than the other classifiers in the literature.
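    One common way to combine SVM and kNN, and a plausible reading of "combined SVMkNN", is to let the SVM decide except near its decision boundary, where a kNN vote takes over. The sketch below uses a tiny Pegasos-style linear SVM; the paper's exact fusion rule, kernel, and hyperparameters are not specified in the abstract, so all of those are assumptions.

```python
import numpy as np

def train_linear_svm(X, y, epochs=200, lam=0.01, seed=0):
    """Tiny Pegasos-style linear SVM; labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1]); b = 0.0; t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:        # hinge-loss violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training points."""
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return 1 if y_train[idx].sum() > 0 else -1

def combined_predict(w, b, X_train, y_train, x, margin=0.5):
    """SVM decides; near the boundary, defer to the kNN vote."""
    score = x @ w + b
    if abs(score) >= margin:
        return 1 if score > 0 else -1
    return knn_predict(X_train, y_train, x)
```

    The `margin` parameter controls how often the fallback fires; it is an assumed design knob, not a value from the paper.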

  10. Prediction of pork loin quality using online computer vision system and artificial intelligence model.

    PubMed

    Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David

    2018-06-01

    The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirements. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrument color measurement and crude fat percentage were used as control measurements. Image features (18 color features, 1 marbling feature, and 88 texture features) were extracted from whole pork loin color images. An artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that the CVS with support vector machine modeling reached the highest prediction accuracy of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.
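    A minimal sketch of the kind of color and marbling features such a CVS might extract is shown below. The specific choices (per-channel mean/std, and a bright-fleck fraction as a marbling proxy) are illustrative assumptions; the paper's 107 features are not enumerated in the abstract.

```python
import numpy as np

def loin_features(rgb):
    """rgb: H x W x 3 float array in [0, 1]. Returns a small feature
    vector: per-channel mean and std (color features), plus a marbling
    proxy: the fraction of pixels brighter than mean + 1 std
    (fat-like flecks), an assumed heuristic."""
    feats = []
    for c in range(3):
        ch = rgb[..., c]
        feats += [float(ch.mean()), float(ch.std())]
    gray = rgb.mean(axis=2)
    feats.append(float((gray > gray.mean() + gray.std()).mean()))
    return np.array(feats)
```

    Such a vector would then be fed to the SVM grade classifier described in the abstract.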

  11. Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision

    NASA Astrophysics Data System (ADS)

    Hendrawan, Y.; Hawa, L. C.; Damayanti, R.

    2018-03-01

    This study applied a machine vision-based chip-drying monitoring system to optimize the drying process of cassava chips. The objective of this study is to propose fish swarm intelligent (FSI) optimization algorithms to find the most significant set of image features for predicting the water content of cassava chips during drying using an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximizes the prediction accuracy of the ANN. Multi-objective optimization (MOO) was used in this study, consisting of prediction-accuracy maximization and feature-subset-size minimization. The results showed that the best feature subset comprised grey mean, L(Lab) mean, a(Lab) energy, red entropy, hue contrast, and grey homogeneity. The best feature subset was tested successfully in the ANN model to describe the relationship between image features and water content of cassava chips during drying, with an R2 of 0.9 between real and predicted data.
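    The FSI search itself is beyond an abstract-level sketch, but the multi-objective scoring it optimizes (prediction accuracy up, subset size down) can be illustrated with a plain random search and a least-squares stand-in for the ANN. Both are simplifications, not the paper's method, and the penalty weight is an assumed value.

```python
import numpy as np

def subset_score(X, y, subset, alpha=0.05):
    """Fit least squares on the selected columns; score = R^2 minus a
    penalty per selected feature (the two objectives folded into one)."""
    Xs = np.column_stack([X[:, subset], np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    r2 = 1.0 - resid.var() / y.var()
    return r2 - alpha * len(subset)

def select_features(X, y, iters=200, seed=0):
    """Random-search stand-in for the swarm: sample subsets, keep the best."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(iters):
        subset = np.flatnonzero(rng.random(X.shape[1]) < 0.5)
        if subset.size == 0:
            continue
        s = subset_score(X, y, subset)
        if s > best_score:
            best, best_score = subset, s
    return best
```

    A swarm method differs mainly in how candidate subsets are proposed (guided by the population's best solutions rather than uniformly at random); the scoring trade-off is the same.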

  12. Transforming American Education

    ERIC Educational Resources Information Center

    Horn, Michael B.; Mackey, Katherine

    2011-01-01

    In this article the authors accept as a given the National Education Technology Plan's vision of a transformed education system powered by technology such that learners receive personalized and engaging learning experiences, and where assessment, teaching, infrastructure, and productivity are redefined. The article analyzes this vision of a…

  13. Evaluation of Fused Synthetic and Enhanced Vision Display Concepts for Low-Visibility Approach and Landing

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III; Wilz, Susan J.

    2009-01-01

    NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. Improvements in lateral path control performance were realized when the Head-Up Display concepts included a tunnel, independent of the imagery (enhanced vision or fusion of enhanced and synthetic vision) presented with it. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, of itself, provide an improvement in runway incursion detection without being specifically tailored for this application.

  14. Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review

    PubMed Central

    Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul

    2012-01-01

    Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. Progress on this question has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders using vision and non-vision sensor technologies, as well as combinations of these. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non-vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Synonyms of these phrases and the logical operators “and”, “or”, and “not” were also used in the article-searching procedure. Out of the 91 published articles found, this review identified 84 articles that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review paper will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548

  15. Seminar for High School Students “Practice on Manufacturing Technology by Advanced Machine Tools”

    NASA Astrophysics Data System (ADS)

    Marui, Etsuo; Yamawaki, Masao; Taga, Yuken; Omoto, Ken'ichi; Miyaji, Reiji; Ogura, Takahiro; Tsubata, Yoko; Sakai, Toshimasa

    The seminar ‘Practice on Manufacturing Technology by Advanced Machine Tools’ for high school students was held at the supporting center for technology education of Gifu University, under the sponsorship of the Japan Society of Mechanical Engineers. The seminar was held in the hope that many students would become interested in manufacturing through the experience. Operating a CNC milling machine and a CNC wire-cut electric discharge machine, the students made original nameplates. The participants themselves wrote the programs that controlled the CNC machine tools. In this report, some valuable results obtained through this experience are explained.

  16. Using human brain activity to guide machine learning.

    PubMed

    Fong, Ruth C; Scheirer, Walter J; Cox, David D

    2018-03-29

    Machine learning is a field of computer science that builds algorithms that learn. In many cases, machine learning algorithms are used to recreate a human ability like adding a caption to a photo, driving a car, or playing a game. While the human brain has long served as a source of inspiration for machine learning, little effort has been made to directly use data collected from working brains as a guide for machine learning algorithms. Here we demonstrate a new paradigm of "neurally-weighted" machine learning, which takes fMRI measurements of human brain activity from subjects viewing images, and infuses these data into the training process of an object recognition learning algorithm to make it more consistent with the human brain. After training, these neurally-weighted classifiers are able to classify images without requiring any additional neural data. We show that our neural-weighting approach can lead to large performance gains when used with traditional machine vision features, as well as to significant improvements with already high-performing convolutional neural network features. The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.
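    As a rough, generic illustration of the sample-weighting idea, the sketch below trains a logistic-regression classifier whose per-example loss weights stand in for neurally derived activity scores. This is a numpy toy on synthetic data, not the authors' implementation (which used fMRI recordings with machine-vision and CNN features); the data, weights, and hyperparameters are all hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fit_weighted_logreg(X, y, sample_weight, lr=0.5, epochs=300):
        """Logistic regression whose loss weights each training example;
        in the paper's setting the weights would come from brain activity."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            g = sample_weight * (p - y)        # per-example weighted gradient
            w -= lr * X.T @ g / len(y)
            b -= lr * g.mean()
        return w, b

    # synthetic stand-in: two Gaussian classes in 2-D
    X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    neural_w = rng.uniform(0.5, 1.5, size=200)   # hypothetical per-image weights
    w, b = fit_weighted_logreg(X, y, neural_w)
    acc = (((X @ w + b) > 0).astype(int) == y).mean()
    ```

    Higher-weighted examples pull the decision boundary more strongly, which is the mechanism the paper exploits with real neural data.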

  17. Deployment and evaluation of a dual-sensor autofocusing method for on-machine measurement of patterns of small holes on freeform surfaces.

    PubMed

    Chen, Xiaomei; Longstaff, Andrew; Fletcher, Simon; Myers, Alan

    2014-04-01

    This paper presents and evaluates an active dual-sensor autofocusing system that combines an optical vision sensor and a tactile probe for autofocusing on arrays of small holes on freeform surfaces. The system has been tested on a two-axis test rig and then integrated onto a three-axis computer numerical control (CNC) milling machine, where the aim is to rapidly and controllably measure the hole position errors while the part is still on the machine. The principle of operation is for the tactile probe to locate the nominal positions of holes, and the optical vision sensor follows to focus and capture the images of the holes. The images are then processed to provide hole position measurement. In this paper, the autofocusing deviations are analyzed. First, the deviations caused by the geometric errors of the axes on which the dual-sensor unit is deployed are estimated to be 11 μm when deployed on the test rig and 7 μm on the CNC machine tool. Subsequently, the autofocusing deviations caused by the interaction of the tactile probe, surface, and small hole are mathematically analyzed and evaluated. The deviations are a result of the tactile probe radius, the curvatures at the positions where small holes are drilled on the freeform surface, and the effect of the position error of the hole on focusing. An example case study is provided for the measurement of a pattern of small holes on an elliptical cylinder on the two machines. The absolute sum of the autofocusing deviations is 118 μm on the test rig and 144 μm on the machine tool. This is much less than the 500 μm depth of field of the optical microscope. Therefore, the method is capable of capturing a group of clear images of the small holes on this workpiece on either machine.

  18. Automation and robotics technology for intelligent mining systems

    NASA Technical Reports Server (NTRS)

    Welsh, Jeffrey H.

    1989-01-01

    The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.

  19. Office of Biological and Physical Research: Overview Transitioning to the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Crouch, Roger

    2004-01-01

    Viewgraphs on NASA's transition to its vision for space exploration are presented. The topics include: 1) Strategic Directives Guiding the Human Support Technology Program; 2) Progressive Capabilities; 3) A Journey to Inspire, Innovate, and Discover; 4) Risk Mitigation Status Technology Readiness Level (TRL) and Countermeasures Readiness Level (CRL); 5) Biological And Physical Research Enterprise Aligning With The Vision For U.S. Space Exploration; 6) Critical Path Roadmap Reference Missions; 7) Rating Risks; 8) Current Critical Path Roadmap (Draft) Rating Risks: Human Health; 9) Current Critical Path Roadmap (Draft) Rating Risks: System Performance/Efficiency; 10) Biological And Physical Research Enterprise Efforts to Align With Vision For U.S. Space Exploration; 11) Aligning with the Vision: Exploration Research Areas of Emphasis; 12) Code U Efforts To Align With The Vision For U.S. Space Exploration; 13) Types of Critical Path Roadmap Risks; and 14) ISS Human Support Systems Research, Development, and Demonstration. A summary discussing the vision for U.S. space exploration is also provided.

  20. Understanding dental CAD/CAM for restorations - dental milling machines from a mechanical engineering viewpoint. Part A: chairside milling machines.

    PubMed

    Lebon, Nicolas; Tapie, Laurent; Duret, Francois; Attal, Jean-Pierre

    2016-01-01

    The dental milling machine is an important device in the dental CAD/CAM chain. Nowadays, dental numerical controlled (NC) milling machines are available for dental surgeries (chairside solution). This article provides a mechanical engineering approach to NC milling machines to help dentists understand the involvement of technology in digital dentistry practice. First, some technical concepts and definitions associated with NC milling machines are described from a mechanical engineering viewpoint. The technical and economic criteria of four chairside dental NC milling machines that are available on the market are then described. The technical criteria are focused on the capacities of the embedded technologies of these milling machines to mill both prosthetic materials and types of shape restorations. The economic criteria are focused on investment costs and interoperability with third-party software. The clinical relevance of the technology is assessed in terms of the accuracy and integrity of the restoration.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bates, Robert; McConnell, Elizabeth

    Machining methods across many industries generally require multiple operations to machine and process advanced materials, features with micron precision, and complex shapes. The resulting multiple machining platforms can significantly affect manufacturing cycle time and the precision of the final parts, with a resultant increase in cost and energy consumption. Ultrafast lasers represent a transformative and disruptive technology that removes material with micron precision in a single-step manufacturing process. Such precision results from athermal ablation without modification or damage to the remaining material, which is the key differentiator between ultrafast laser technologies and traditional laser technologies or mechanical processes. Athermal ablation without modification or damage to the material eliminates post-processing or multiple manufacturing steps. Combined with the appropriate technology to control the motion of the work piece, ultrafast lasers are excellent candidates to provide breakthrough machining capability for difficult-to-machine materials. At the project onset in early 2012, the project team recognized that substantial effort was necessary to improve the application of ultrafast laser and precise motion control technologies (for micromachining difficult-to-machine materials) to further the aggregate throughput and yield improvements over conventional machining methods. The project described in this report advanced these leading-edge technologies through the development and verification of two platforms: a hybrid enhanced laser chassis and a multi-application testbed.

  2. Heuristics in primary care for recognition of unreported vision loss in older people: a technology development study.

    PubMed

    Wijeyekoon, Skanda; Kharicha, Kalpa; Iliffe, Steve

    2015-09-01

    To evaluate heuristics (rules of thumb) for recognition of undetected vision loss in older patients in primary care. Vision loss is associated with ageing, and its prevalence is increasing. Visual impairment has a broad impact on health, functioning and well-being. Unrecognised vision loss remains common, and screening interventions have yet to reduce its prevalence. An alternative approach is to enhance practitioners' skills in recognising undetected vision loss by building a more detailed picture of those who are likely not to act on vision changes, report symptoms or have eye tests. This paper describes a qualitative technology development study to evaluate heuristics for recognition of undetected vision loss in older patients in primary care. Using a previous modelling study, two heuristics in the form of mnemonics were developed to aid pattern recognition and allow general practitioners to identify potential cases of unreported vision loss. These heuristics were then analysed with experts. Findings: It was concluded that their implementation in modern general practice was unsuitable and that an alternative solution should be sought.

  3. KSC-04PD-1998

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. (From left) Dr. Julian Earls, director of NASA Glenn Research Center, astronaut Leland Melvin, Sara Thompson, team lead, and KSC Deputy Director Dr. Woodrow Whitlow Jr. pose for a photo at Ronald E. McNair High School in Atlanta, a NASA Explorer School, after a presentation. Dr. Whitlow visited the school to share the vision for space exploration with the next generation of explorers. Whitlow talked with students about our destiny as explorers, NASA's stepping-stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space. Dr. Earls discussed the future and the vision for space, plus the NASA careers needed to meet the vision. Melvin talked about the importance of teamwork and what it takes for mission success.

  4. KSC-04PD-1988

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. Dr. Julian Earls, director of the NASA Glenn Research Center, talks to students at Ronald E. McNair High School in Atlanta, a NASA Explorer School. He accompanied KSC Deputy Director Dr. Woodrow Whitlow Jr., who visited the school to share the vision for space exploration with the next generation of explorers. Dr. Earls discussed the future and the vision for space, plus the NASA careers needed to meet the vision. Astronaut Leland Melvin (far right) accompanied Whitlow, talking with students about the importance of teamwork and what it takes for mission success. Whitlow talked with students about our destiny as explorers, NASA's stepping-stone approach to exploring Earth, the Moon, Mars and beyond, how space impacts our lives, and how people and machines rely on each other in space.

  5. Fuzzy Petri nets to model vision system decisions within a flexible manufacturing system

    NASA Astrophysics Data System (ADS)

    Hanna, Moheb M.; Buck, A. A.; Smith, R.

    1994-10-01

    The paper presents a Petri net approach to modelling, monitoring and control of the behavior of an FMS cell. The FMS cell described comprises a pick-and-place robot, a vision system, a CNC milling machine and three conveyors. The work illustrates how block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on Fuzzy Petri nets (fuzzy logic with Petri nets), including an artificial neural network (Fuzzy Neural Petri nets), to model and control vision system decisions and robot sequences within an FMS cell. This methodology can be used as a graphical modelling tool to monitor and control imprecise, vague and uncertain situations, and to determine the quality of the output product of an FMS cell.
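    A minimal sketch of the fuzzy Petri net firing rule such approaches build on: a transition fires with a certainty equal to the minimum truth degree of its input places, scaled by the rule's confidence. The places, rule, and confidence value below are hypothetical, for illustration only:

    ```python
    # minimal fuzzy Petri net transition: fuzzy AND (min) over input places,
    # scaled by the rule's confidence, fuzzy OR (max) into the output places
    def fire(marking, inputs, outputs, confidence):
        """marking: dict mapping place name -> truth degree in [0, 1]."""
        degree = min(marking[p] for p in inputs) * confidence
        new = dict(marking)
        for p in outputs:
            new[p] = max(new[p], degree)   # fuzzy OR on each output place
        return new

    # hypothetical cell rule: part present AND vision check passed -> accept part
    m = {"part_present": 0.9, "vision_ok": 0.7, "accept": 0.0}
    m = fire(m, inputs=("part_present", "vision_ok"), outputs=("accept",),
             confidence=0.8)
    # m["accept"] is now min(0.9, 0.7) * 0.8 ≈ 0.56
    ```

    Chaining such transitions yields a graphical model in which vague sensor evidence propagates through the cell's control logic with an explicit certainty at each step.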

  6. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    NASA Astrophysics Data System (ADS)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor was constructed for the welding of aluminum pipe. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position, with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.
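    The edge-recognition step can be sketched generically as thresholding followed by boundary extraction. This is a simplified numpy stand-in for the paper's image processing algorithm, run here on a synthetic frame (a bright disc standing in for the molten-pool blob):

    ```python
    import numpy as np

    def pool_edge(img, thresh=0.5):
        """Binarize a grayscale frame and mark the boundary pixels of the
        bright region: a pixel is on the edge if it is inside the mask but
        has at least one 4-neighbour outside it."""
        mask = img > thresh
        padded = np.pad(mask, 1, constant_values=False)
        inside = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                  & padded[1:-1, :-2] & padded[1:-1, 2:])
        return mask & ~inside

    # synthetic frame: bright disc on a dark background
    yy, xx = np.mgrid[0:64, 0:64]
    frame = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
    edge = pool_edge(frame)
    ```

    In a real system the extracted edge coordinates would then feed the controller (here, the neural network model regulating welding speed).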

  7. Robotic Technology: An Assessment and Forecast,

    DTIC Science & Technology

    1984-07-01

    Research Associates, Inc.; Dr. Roger Nagel, Lehigh University; Dr. Charles Rosen, Machine Intelligence Corporation; and Mr. Jack Thornton, Robot Insider... (Subcontractors: systems for assembly and inspection, Adopt Technology, Stanford University, SRI) AFSC MANTECH: McDonnell Douglas, Machine ... supervisory control, man-machine interaction and system integration. Foreign R&D: The U.S. faces a strong technological challenge in robotics from

  8. Machine Tool Advanced Skills Technology (MAST). Common Ground: Toward a Standards-Based Training System for the U.S. Machine Tool and Metal Related Industries. Volume 1: Executive Summary, of a 15-Volume Set of Skills Standards and Curriculum Training Materials for the Precision Manufacturing Industry.

    ERIC Educational Resources Information Center

    Texas State Technical Coll., Waco.

    The Machine Tool Advanced Skills Technology (MAST) consortium was formed to address the shortage of skilled workers for the machine tools and metals-related industries. Featuring six of the nation's leading advanced technology centers, the MAST consortium developed, tested, and disseminated industry-specific skill standards and model curricula for…

  9. Plan for conducting an international machine tool task force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, G.P.; McClure, E.R.; Schuman, J.F.

    1978-08-28

    The basic objectives of the Machine Tool Task Force (MTTF) are to characterize and summarize the state of the art of cutting machine tool technology and to identify promising areas of future R and D. These goals will be accomplished with a series of multidisciplinary teams of prominent experts and individuals experienced in the specialized technologies of machine tools or in the management of machine tool operations. Experts will be drawn from all areas of the machine tool community: machine tool users or buyer organizations, builders, and R and D establishments including universities and government laboratories, both domestic and foreign. A plan for accomplishing this task is presented. The area of machine tool technology has been divided into about two dozen technology subjects on which teams of one or more experts will work. These teams are, in turn, organized into four principal working groups dealing, respectively, with machine tool accuracy, mechanics, control, and management systems/utilization. Details are presented on specific subjects to be covered, the organization of the Task Force and its four working groups, and the basic approach to determining the state of the art of technology and the future directions of this technology. The planned review procedure, the potential benefits, our management approach, and the schedule, as well as the key participating personnel and their background are discussed. The initial meeting of MTTF members will be held at a plenary session on October 16 and 17, 1978, in Scottsdale, AZ. The MTTF study will culminate in a conference on September 1, 1980, in Chicago, IL, immediately preceding the 1980 International Machine Tool Show. At this time, our results will be released to the public; a series of reports will be published in late 1980.

  10. A Vision for Ice Giant Exploration

    NASA Technical Reports Server (NTRS)

    Hofstadter, M.; Simon, A.; Atreya, S.; Banfield, D.; Fortney, J.; Hayes, A.; Hedman, M.; Hospodarsky, G.; Mandt, K.; Masters, A.; hide

    2017-01-01

    From Voyager to a Vision for 2050: NASA and ESA have just completed a study of candidate missions to Uranus and Neptune, the so-called ice giant planets. It is a Pre-Decadal Survey Study, meant to inform the next Planetary Science Decadal Survey about opportunities for missions launching in the 2020's and early 2030's. There have been no space flight missions to the ice giants since the Voyager 2 flybys of Uranus in 1986 and Neptune in 1989. This paper presents some conclusions of that study (hereafter referred to as The Study), and how the results feed into a vision for where planetary science can be in 2050. Reaching that vision will require investments in technology and ground-based science in the 2020's, flight during the 2030's along with continued technological development of both ground- and space-based capabilities, and data analysis and additional flights in the 2040's. We first discuss why exploring the ice giants is important. We then summarize the science objectives identified by The Study, and our vision of the science goals for 2050. We then review some of the technologies needed to make this vision a reality.

  11. Laser projection positioning of spatial contour curves via a galvanometric scanner

    NASA Astrophysics Data System (ADS)

    Tu, Junchao; Zhang, Liyan

    2018-04-01

    The technology of laser projection positioning is widely applied in advanced manufacturing fields (e.g. composite plying, part location and installation). To use it better, a laser projection positioning (LPP) system is designed and implemented. First, the LPP system is built from a laser galvanometric scanning (LGS) system and a binocular vision system. Applying a Single-hidden-Layer Feed-forward Neural Network (SLFN), the system model is then constructed. Second, the LGS system and the binocular system, which are initially independent, are integrated through a data-driven calibration method based on the extreme learning machine (ELM) algorithm. Finally, a projection positioning method is proposed within the framework of the calibrated SLFN system model. A well-designed experiment is conducted to verify the viability and effectiveness of the proposed system. In addition, the accuracy of projection positioning is evaluated to show that the LPP system achieves a good localization effect.
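    The ELM training step referred to above has a simple closed form: the hidden-layer input weights are drawn at random and never trained, and only the output weights are solved by least squares. A generic numpy sketch on a synthetic smooth mapping (illustrative only, not the paper's actual galvo-to-camera calibration data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def elm_fit(X, Y, n_hidden=100):
        """Fit a single-hidden-layer network the ELM way: random input
        weights, output weights solved in closed form by least squares."""
        W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                        # hidden activations
        beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # closed-form solve
        return W, b, beta

    def elm_predict(model, X):
        W, b, beta = model
        return np.tanh(X @ W + b) @ beta

    # toy stand-in for calibration: map 2-D inputs to a smooth 3-D output
    X = rng.uniform(-1, 1, size=(200, 2))
    Y = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 1]), X[:, 0] * X[:, 1]])
    model = elm_fit(X, Y)
    err = np.abs(elm_predict(model, X) - Y).max()
    ```

    Because training reduces to one linear solve, ELM calibration is fast enough to re-run whenever the two subsystems are re-integrated.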

  12. Using blackmail, bribery, and guilt to address the tragedy of the virtual intellectual commons

    NASA Astrophysics Data System (ADS)

    Griffith, P. C.; Cook, R. B.; Wilson, B. E.; Gentry, M. J.; Horta, L. M.; McGroddy, M.; Morrell, A. L.; Wilcox, L. E.

    2008-12-01

    One goal of the NSF's vision for 21st Century Cyberinfrastructure is to create a virtual intellectual commons for the scientific community where advanced technologies perpetuate transformation of this community's productivity and capabilities. The metadata describing scientific observations, like the first paragraph of a news story, should answer the questions who? what? why? where? when? and how?, making them discoverable, comprehensible, contextualized, exchangeable, and machine-readable. Investigators who create good scientific metadata increase the scientific value of their observations within such a virtual intellectual commons. But the tragedy of this commons arises when investigators wish to receive without giving in return. The authors of this talk will describe how they have used combinations of blackmail, bribery, and guilt to motivate good behavior by investigators participating in two major scientific programs (NASA's component of the Large-scale Biosphere-Atmosphere Experiment in Amazonia; and the US Climate Change Science Program's North American Carbon Program).

  13. Minimalism context-aware displays.

    PubMed

    Cai, Yang

    2004-12-01

    Despite the rapid development of cyber technologies, today we still have very limited attention and communication bandwidth to process the increasing information flow. The goal of the study is to develop a context-aware filter to match the information load with particular needs and capacities. The functions include bandwidth-resolution trade-off and user context modeling. From the empirical lab studies, it is found that the resolution of images can be reduced by an order of magnitude if the viewer knows that he/she is looking for particular features. The adaptive display queue is optimized with real-time operational conditions and the user's inquiry history. Instead of measuring the operator's behavior directly, ubiquitous computing models are developed to anticipate the user's behavior from operational environment data. A case study of video stream monitoring for transit security is discussed in the paper. In addition, the author addresses the future direction of coherent human-machine vision systems.
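    The bandwidth-resolution trade-off can be sketched as block-averaging downsampling, which cuts the pixel count by a factor of k² at the cost of detail. This is a generic numpy illustration of the principle, not the system described in the paper:

    ```python
    import numpy as np

    def block_downsample(img, k):
        """Reduce resolution by k in each dimension via block averaging,
        trading spatial detail for a k*k reduction in bandwidth."""
        h, w = img.shape
        img = img[:h - h % k, :w - w % k]            # crop to a multiple of k
        return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    img = np.arange(64.0).reshape(8, 8)   # toy 8x8 "frame"
    small = block_downsample(img, 4)      # 8x8 -> 2x2, 16x fewer pixels
    ```

    A context-aware filter would choose k per frame: large when the viewer's known task tolerates coarse imagery, small when fine features matter.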

  14. Human-Robot Control Strategies for the NASA/DARPA Robonaut

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.

    2003-01-01

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.

  15. TACT glossary: technoscience.

    PubMed

    Tambone, V; Pennacchini, M

    2010-01-01

    The term technoscience (T) indicates the complex interactions between contemporary science and technology, which have become practically inseparable. From an epistemological point of view, T considers only quantitative knowledge, in a reductionist way. Nature has been reduced to a machine that works according to laws learnt through experimental science. At present, technical efficiency represents an operational dominion over Nature; it gives power to those who possess it. Scientists, considered as visionaries, have the assignment of leading society. They create new cosmos-visions that are technocentric; thus T uses the human being as a subject of experimentation and transforms some essential dimensions of the human being. All this suggests the necessity of an ethical evaluation of the integration of different subjects' actions in what we call integrated action. This configuration involves ethical obligations for the agent: he/she has to act preserving and allowing collaboration, and respecting the professional's individual responsibility.

  16. VR for Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    Blackmon, Theodore

    1998-01-01

    Virtual reality (VR) technology has played an integral role in Mars Pathfinder mission operations. Using an automated machine vision algorithm, the 3D topography of the Martian surface was rapidly recovered from the stereo images captured by the lander camera to produce photo-realistic 3D models. An advanced interface was developed for visualization and interaction with the virtual environment of the Pathfinder landing site for mission scientists at the Space Flight Operations Facility of the Jet Propulsion Laboratory. The VR aspect of the display allowed mission scientists to navigate on Mars while remaining here on Earth, thus improving their spatial awareness of the rock field that surrounds the lander. Measurements of positions, distances and angles could be easily extracted from the topographic models, providing valuable information for science analysis and mission planning. Moreover, the VR map of Mars has also been used to assist with the archiving and planning of activities for the Sojourner rover.

  17. Development of on line automatic separation device for apple and sleeve

    NASA Astrophysics Data System (ADS)

    Xin, Dengke; Ning, Duo; Wang, Kangle; Han, Yuhang

    2018-04-01

    Based on an STM32F407 single-chip microcomputer as the control core, an automatic separation device for fruit sleeves is designed. The design consists of hardware and software. The hardware includes a mechanical tooth separator and a three-degree-of-freedom manipulator, as well as an industrial control computer, an image data acquisition card, an end effector and other components. The software system, based on the Visual C++ development environment, achieves localization and recognition of the fruit sleeve using image processing and machine vision, and drives the manipulator to capture the foam net sleeve, transfer it, and place it at the designated position. Tests show that the automatic separation device has the advantages of quick response speed and a high separation success rate, can realize separation of the apple and the plastic foam sleeve, and lays the foundation for further study and for application on enterprise production lines.

  18. LEDs as light source: examining quality of acquired images

    NASA Astrophysics Data System (ADS)

    Bachnak, Rafic; Funtanilla, Jeng; Hernandez, Jose

    2004-05-01

    Recent advances in technology have made light emitting diodes (LEDs) viable in a number of applications, including vehicle stoplights, traffic lights, machine vision inspection, illumination, and street signs. This paper presents the results of comparing images taken by a videoscope using two different light sources. One of the sources is the internal metal halide lamp and the other is an LED placed at the tip of the insertion tube. Images acquired using these two light sources were quantitatively compared using their histograms, intensity profiles along a line segment, and edge detection. Images were also qualitatively compared using image registration and transformation. The gray-level histogram, edge detection, image profile and image registration do not offer conclusive results. The LED light source, however, produces good images for visual inspection by an operator. The paper presents the results and discusses the usefulness and shortcomings of the various comparison methods.
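    The gray-level histogram comparison used in such studies can be sketched generically as follows. The images here are synthetic stand-ins (the real study compared videoscope captures under the two light sources):

    ```python
    import numpy as np

    def gray_histogram(img, bins=16):
        """Normalized gray-level histogram of an image on the 0-255 scale."""
        h, _ = np.histogram(img, bins=bins, range=(0, 256))
        return h / h.sum()

    def hist_distance(a, b):
        """L1 distance between two normalized histograms (0 = identical)."""
        return np.abs(a - b).sum()

    rng = np.random.default_rng(2)
    # synthetic "lamp" image and a brighter, contrast-stretched "LED" image
    halide = rng.normal(128, 30, (64, 64)).clip(0, 255)
    led = (halide * 1.1 + 10).clip(0, 255)
    d_self = hist_distance(gray_histogram(halide), gray_histogram(halide))
    d_cross = hist_distance(gray_histogram(halide), gray_histogram(led))
    ```

    As the abstract notes, a nonzero histogram distance alone is rarely conclusive, since it ignores spatial structure; hence the complementary profile, edge, and registration comparisons.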

  19. Quantifying Pilot Visual Attention in Low Visibility Terminal Operations

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.

    2012-01-01

    Quantifying pilot visual behavior allows researchers not only to determine where a pilot is looking and when, but also to track specific behaviors when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, especially the data reduction algorithms and logic used to transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods used to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation

  20. A Review of Imaging Techniques for Plant Phenotyping

    PubMed Central

    Li, Lei; Zhang, Qin; Huang, Danfeng

    2014-01-01

    Given the rapid development of plant genomic technologies, a lack of access to plant phenotyping capabilities limits our ability to dissect the genetics of quantitative traits. Effective, high-throughput phenotyping platforms have recently been developed to solve this problem. In high-throughput phenotyping platforms, a variety of imaging methodologies are being used to collect data for quantitative studies of complex traits related to the growth, yield and adaptation to biotic or abiotic stress (disease, insects, drought and salinity). These imaging techniques include visible imaging (machine vision), imaging spectroscopy (multispectral and hyperspectral remote sensing), thermal infrared imaging, fluorescence imaging, 3D imaging and tomographic imaging (MRT, PET and CT). This paper presents a brief review on these imaging techniques and their applications in plant phenotyping. The features used to apply these imaging techniques to plant phenotyping are described and discussed in this review. PMID:25347588

  1. Shotgun Optical Maps of the Whole Escherichia coli O157:H7 Genome

    PubMed Central

    Lim, Alex; Dimalanta, Eileen T.; Potamousis, Konstantinos D.; Yen, Galex; Apodoca, Jennifer; Tao, Chunhong; Lin, Jieyi; Qi, Rong; Skiadas, John; Ramanathan, Arvind; Perna, Nicole T.; Plunkett, Guy; Burland, Valerie; Mau, Bob; Hackett, Jeremiah; Blattner, Frederick R.; Anantharaman, Thomas S.; Mishra, Bhubaneswar; Schwartz, David C.

    2001-01-01

    We have constructed NheI and XhoI optical maps of Escherichia coli O157:H7 solely from genomic DNA molecules to provide a uniquely valuable scaffold for contig closure and sequence validation. E. coli O157:H7 is a common pathogen found in contaminated food and water. Our approach obviated the need for the analysis of clones, PCR products, and hybridizations, because maps were constructed from ensembles of single DNA molecules. Shotgun sequencing of bacterial genomes remains labor-intensive, despite advances in sequencing technology. This is partly due to manual intervention required during the last stages of finishing. The applicability of optical mapping to this problem was enhanced by advances in machine vision techniques that improved mapping throughput and created a path to full automation of mapping. Comparisons were made between maps and sequence data that characterized sequence gaps and guided nascent assemblies. PMID:11544203

  2. Plant features measurements for robotics

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1989-01-01

    Initial studies of the technical feasibility of using machine vision and color image processing to measure plant health were performed. Wheat plants were grown in nutrient solutions deficient in nitrogen, potassium, and iron. An additional treatment imposed water stress on wheat plants that received a full complement of nutrients. The results for juvenile (less than 2 weeks old) wheat plants show that imaging technology can be used to detect nutrient deficiencies. The relative amount of green color in a leaf declined with increased water stress. The absolute amount of green was higher for nitrogen-deficient leaves compared to the control plants. Relative greenness was lower for iron-deficient leaves, but the absolute green values were higher. The data showed patterns across the leaf consistent with visual symptoms. The development of additional color image processing routines to recognize these patterns would improve the performance of this sensor of plant health.
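
    A relative-greenness measurement of the kind described can be sketched as follows. The G/(R+G+B) chromaticity index and the synthetic leaf arrays are assumptions for illustration, not the study's exact metric or data.

```python
import numpy as np

def relative_greenness(rgb):
    """Mean green chromaticity G / (R + G + B) over all pixels."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1)
    safe = np.where(total == 0, 1.0, total)      # avoid division by zero
    g = np.where(total > 0, rgb[..., 1] / safe, 0.0)
    return float(g.mean())

# Synthetic leaf patches: a green "healthy" leaf and a yellowing one.
healthy = np.zeros((4, 4, 3)); healthy[..., 0] = 50;  healthy[..., 1] = 200
stressed = np.zeros((4, 4, 3)); stressed[..., 0] = 120; stressed[..., 1] = 120
```

    The pattern-recognition step the authors propose would go further, mapping greenness spatially across the leaf rather than averaging it.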

  3. LED lighting for use in multispectral and hyperspectral imaging

    USDA-ARS's Scientific Manuscript database

    Lighting for machine vision and hyperspectral imaging is an important component for collecting high quality imagery. However, it is often given minimal consideration in the overall design of an imaging system. Tungsten-halogen lamps are the most common source of illumination for broad spectrum appl...

  4. Evaluation and implementation of a machine vision system to categorize extraneous matter in cotton

    USDA-ARS's Scientific Manuscript database

    The Cotton Trash Identification System (CTIS) developed at the Southwestern Cotton Ginning Research Laboratory was evaluated for identification and categorization of extraneous matter (EM) in cotton. The system’s categorization of trash objects in cotton images was evaluated against Agricultural Mar...

  5. Prediction of firmness and soluble solids content of blueberries using hyperspectral reflectance imaging

    USDA-ARS's Scientific Manuscript database

    Currently, blueberries are inspected and sorted by color, size and/or firmness (or softness) in packinghouses, using different inspection techniques like machine vision and mechanical vibration or impact. A new inspection technique is needed for effectively assessing both external features and inter...

  6. Buildings of the Future Scoping Study: A Framework for Vision Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Na; Goins, John D.

    2015-02-01

    The Buildings of the Future Scoping Study, funded by the U.S. Department of Energy (DOE) Building Technologies Office, seeks to develop a vision for what U.S. mainstream commercial and residential buildings could become in 100 years. This effort is not intended to predict the future or develop a specific building design solution. Rather, it will explore future building attributes and offer possible pathways of future development. Whether we achieve a more sustainable built environment depends not just on technologies themselves, but on how effectively we envision the future and integrate these technologies in a balanced way that generates economic, social, and environmental value. A clear, compelling vision of future buildings will attract the right strategies, inspire innovation, and motivate action. This project will create a cross-disciplinary forum of thought leaders to share their views. The collective views will be integrated into a future building vision and published in September 2015. This report presents a research framework for the vision development effort based on a literature survey and gap analysis. This document has four objectives. First, it defines the project scope. Next, it identifies gaps in the existing visions and goals for buildings and discusses the possible reasons why some visions did not work out as hoped. Third, it proposes a framework to address those gaps in the vision development. Finally, it presents a plan for a series of panel discussions and interviews to explore a vision that mitigates problems with past building paradigms while addressing key areas that will affect buildings going forward.

  7. Vision Science and Technology at NASA: Results of a Workshop

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Editor); Mulligan, Jeffrey B. (Editor)

    1990-01-01

    A broad review is given of vision science and technology within NASA. The subject is defined and its applications in both NASA and the nation at large are noted. A survey of current NASA efforts is given, noting strengths and weaknesses of the NASA program.

  8. Who's Got the Bridge? - Towards Safe, Robust Autonomous Operations at NASA Langley's Autonomy Incubator

    NASA Technical Reports Server (NTRS)

    Allen, B. Danette; Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Crisp, Vicki K.

    2015-01-01

    NASA aeronautics research has made decades of contributions to aviation. Both aircraft and air traffic management (ATM) systems in use today contain NASA-developed and NASA sponsored technologies that improve safety and efficiency. Recent innovations in robotics and autonomy for automobiles and unmanned systems point to a future with increased personal mobility and access to transportation, including aviation. Automation and autonomous operations will transform the way we move people and goods. Achieving this mobility will require safe, robust, reliable operations for both the vehicle and the airspace and challenges to this inevitable future are being addressed now in government labs, universities, and industry. These challenges are the focus of NASA Langley Research Center's Autonomy Incubator whose R&D portfolio includes mission planning, trajectory and path planning, object detection and avoidance, object classification, sensor fusion, controls, machine learning, computer vision, human-machine teaming, geo-containment, open architecture design and development, as well as the test and evaluation environment that will be critical to prove system reliability and support certification. Safe autonomous operations will be enabled via onboard sensing and perception systems in both data-rich and data-deprived environments. Applied autonomy will enable safety, efficiency and unprecedented mobility as people and goods take to the skies tomorrow just as we do on the road today.

  9. Noninvasive forward-scattering system for rapid detection, characterization, and identification of Listeria colonies: image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Rajwa, Bartek; Bayraktar, Bulent; Banada, Padmapriya P.; Huff, Karleigh; Bae, Euiwon; Hirleman, E. Daniel; Bhunia, Arun K.; Robinson, J. Paul

    2006-10-01

    Bacterial contamination by Listeria monocytogenes puts the public at risk and is also costly for the food-processing industry. Traditional methods for pathogen identification require complicated sample preparation for reliable results. Previously, we have reported development of a noninvasive optical forward-scattering system for rapid identification of Listeria colonies grown on solid surfaces. The presented system included application of computer-vision and pattern-recognition techniques to classify scatter patterns formed by bacterial colonies irradiated with laser light. This report shows an extension of the proposed method. A new scatterometer equipped with a high-resolution CCD chip and the application of two additional sets of image features for classification allow for higher accuracy and lower error rates. Features based on Zernike moments are supplemented by Tchebichef moments and Haralick texture descriptors in the new version of the algorithm. Fisher's criterion has been used for feature selection to decrease the training time of machine learning systems. An algorithm based on support vector machines was used for classification of patterns. Low error rates determined by cross-validation, reproducibility of the measurements, and robustness of the system prove that the proposed technology can be implemented in automated devices for detection and classification of pathogenic bacteria.
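
    The Fisher-criterion feature selection step can be sketched as follows, assuming the common two-class per-feature form (squared mean difference over summed variances); the synthetic feature matrix stands in for the moment and texture descriptors.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher's criterion per feature for a two-class problem:
    (mu0 - mu1)^2 / (var0 + var1). Higher = more discriminative."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

def select_top_k(X, y, k):
    """Indices of the k most discriminative features."""
    return np.argsort(fisher_scores(X, y))[::-1][:k]

rng = np.random.default_rng(1)
# Feature 0 separates the two colony classes; features 1-2 are noise.
X = rng.normal(size=(100, 3))
y = (rng.random(100) > 0.5).astype(int)
X[y == 1, 0] += 5.0
top = select_top_k(X, y, 1)
```

    The selected columns would then be passed to the SVM classifier, shrinking the training problem without discarding discriminative information.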

  10. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  11. Using artificial intelligence to bring evidence-based medicine a step closer to making the individual difference.

    PubMed

    Sissons, B; Gray, W A; Bater, A; Morrey, D

    2007-03-01

    The vision of evidence-based medicine is that of experienced clinicians systematically using the best research evidence to meet the individual patient's needs. This vision remains distant from clinical reality, as no complete methodology exists to apply objective, population-based research evidence to the needs of an individual real-world patient. We describe an approach, based on techniques from machine learning, to bridge this gap between evidence and individual patients in oncology. We examine existing proposals for tackling this gap and the relative benefits and challenges of our proposed, k-nearest-neighbour-based, approach.
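
    The k-nearest-neighbour retrieval underlying such an approach can be sketched as follows; the patient features, labels, and distance metric are entirely hypothetical illustrations, not the authors' clinical model.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Majority outcome among the k nearest (Euclidean) neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = y_train[nearest]
    return int(np.bincount(votes).argmax())

# Hypothetical patient records: (age, tumour size in cm) -> outcome label.
X = np.array([[55, 2.0], [60, 2.2], [58, 1.9],
              [30, 0.5], [28, 0.4], [33, 0.6]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0])

pred = knn_predict(X, y, np.array([57.0, 2.1]))
```

    In practice the features would need scaling and clinically meaningful weighting; raw Euclidean distance over mixed units is only adequate for illustration.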

  12. Adaptive Feedback in Local Coordinates for Real-time Vision-Based Motion Control Over Long Distances

    NASA Astrophysics Data System (ADS)

    Aref, M. M.; Astola, P.; Vihonen, J.; Tabus, I.; Ghabcheloo, R.; Mattila, J.

    2018-03-01

    We studied the differences in noise effects, the depth-correlated behavior of sensors, and the errors caused by mapping between coordinate systems in robotic applications of machine vision. In particular, the highly range-dependent noise densities for semi-unknown object detection were considered. An equation is proposed to adapt estimation rules to dramatic changes of noise over longer distances. The algorithm also exploits the smooth feedback of the wheels to overcome the variable latency of visual perception feedback. An experimental evaluation of the integrated system, with and without the algorithm, is presented to highlight its effectiveness.

  13. Survey on diagnosis of diseases from retinal images

    NASA Astrophysics Data System (ADS)

    Das, Sneha; Malathy, C.

    2018-04-01

    The retina is a thin membranous layer of tissue at the back of the eye that provides the central vision needed for daily routines. Identifying retinal diseases at an early stage is a challenging task, since a healthy retina is required for central vision. Several diseases affect the retina, such as retinal tear, retinal detachment, glaucoma, macular hole and macular degeneration. These maladies become more prevalent as a person ages. This survey covers the diagnosis of retinal diseases from retinal images using machine learning techniques.

  14. A Machine Learning Approach to Pedestrian Detection for Autonomous Vehicles Using High-Definition 3D Range Data

    PubMed Central

    Navarro, Pedro J.; Fernández, Carlos; Borraz, Raúl; Alonso, Diego

    2016-01-01

    This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), the Naïve Bayes classifier (NBC), and the Support Vector Machine (SVM). These algorithms were trained with 1931 samples. The final performance of the method, measured in a real traffic scene containing 16 pedestrians and 469 samples of non-pedestrians, shows sensitivity (81.2%), accuracy (96.2%) and specificity (96.8%). PMID:28025565
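
    The projection step can be sketched as follows, assuming the candidate cube's points are normalized to the unit cube and each plane yields a binary occupancy image; the resolution and random points are illustrative, not the paper's parameters.

```python
import numpy as np

def project_cube(points, plane, res=8):
    """Binary occupancy image of 3D points projected onto one plane."""
    axes = {"XY": (0, 1), "XZ": (0, 2), "YZ": (1, 2)}[plane]
    img = np.zeros((res, res), dtype=np.uint8)
    idx = np.clip((points[:, axes] * res).astype(int), 0, res - 1)
    img[idx[:, 0], idx[:, 1]] = 1       # mark occupied cells
    return img

rng = np.random.default_rng(2)
pts = rng.random((200, 3))              # e.g. LIDAR points inside one candidate cube
views = {p: project_cube(pts, p) for p in ("XY", "XZ", "YZ")}

# Concatenated projections form one feature vector for kNN / NBC / SVM.
features = np.concatenate([v.ravel() for v in views.values()])
```

    Any of the three classifiers compared in the article could then be trained on such vectors; the choice of cell resolution trades descriptiveness against dimensionality.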

  15. A Machine Learning Approach to Pedestrian Detection for Autonomous Vehicles Using High-Definition 3D Range Data.

    PubMed

    Navarro, Pedro J; Fernández, Carlos; Borraz, Raúl; Alonso, Diego

    2016-12-23

    This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), the Naïve Bayes classifier (NBC), and the Support Vector Machine (SVM). These algorithms were trained with 1931 samples. The final performance of the method, measured in a real traffic scene containing 16 pedestrians and 469 samples of non-pedestrians, shows sensitivity (81.2%), accuracy (96.2%) and specificity (96.8%).

  16. Dynamical Systems and Motion Vision.

    DTIC Science & Technology

    1988-04-01

    Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139. A.I. Memo No. 1037, April 1988: "Dynamical Systems and Motion Vision," Joachim Heel. Support for the Laboratory's Artificial Intelligence Research is ...

  17. Micro Fluidic Channel Machining on Fused Silica Glass Using Powder Blasting

    PubMed Central

    Jang, Ho-Su; Cho, Myeong-Woo; Park, Dong-Sam

    2008-01-01

    In this study, micro fluid channels are machined on fused silica glass via powder blasting, a mechanical etching process, and the machining characteristics of the channels are experimentally evaluated. In the process, material removal is performed by the collision of micro abrasives, injected by highly compressed air, onto the target surface. This approach can be characterized as brittle-mode machining based on micro crack propagation. Fused silica glass, a high-purity synthetic amorphous silicon dioxide, is selected as the workpiece material. It has a very low thermal expansion coefficient, excellent optical qualities, and exceptional transmittance over a wide spectral range, especially in the ultraviolet range. The powder blasting process parameters affecting the machined results are injection pressure, abrasive particle size and density, stand-off distance, number of nozzle scans, and shape/size of the required patterns. In this study, the influence of the number of nozzle scans, abrasive particle size, and pattern size on the formation of micro channels is investigated. Machined shapes and surface roughness are measured using a 3-dimensional vision profiler and the results are discussed. PMID:27879730

  18. Nonlinear programming for classification problems in machine learning

    NASA Astrophysics Data System (ADS)

    Astorino, Annabella; Fuduli, Antonio; Gaudioso, Manlio

    2016-10-01

    We survey some nonlinear models for classification problems arising in machine learning. In recent years this field has become more and more relevant owing to many practical applications, such as text and web classification, object recognition in machine vision, gene expression profile analysis, DNA and protein analysis, medical diagnosis, customer profiling, etc. Classification deals with the separation of sets by means of appropriate separation surfaces, which are generally obtained by solving a numerical optimization model. While linear separability is the basis of the most popular approach to classification, the Support Vector Machine (SVM), in recent years the use of nonlinear separating surfaces has received some attention. The objective of this work is to recall some of these proposals, mainly in terms of their numerical optimization models. In particular we tackle the polyhedral, ellipsoidal, spherical and conical separation approaches and, for some of them, we also consider the semisupervised versions.
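
    Spherical separation, one of the approaches named above, can be sketched with a deliberately simplified construction: a sphere centred at the barycentre of one class, with radius midway between that class's farthest point and the other class's closest point. The full optimization models in the survey are more general; this fixed-centre heuristic is an illustrative assumption.

```python
import numpy as np

def spherical_separator(A, B):
    """Sphere centred at the barycentre of A that encloses A and
    excludes B, when the two distance ranges do not overlap."""
    c = A.mean(axis=0)
    r_in = np.linalg.norm(A - c, axis=1).max()    # farthest point of A
    r_out = np.linalg.norm(B - c, axis=1).min()   # closest point of B
    if r_in >= r_out:
        return None                # not separable by a sphere at this centre
    return c, (r_in + r_out) / 2.0

A = np.array([[0.0, 0.0], [0.2, 0.1], [-0.1, 0.2]])   # inner class
B = np.array([[2.0, 2.0], [-2.0, 1.5], [1.8, -2.2]])  # outer class
result = spherical_separator(A, B)
```

    The optimization-based formulations instead treat the centre and radius as decision variables and minimize a margin-style error, which handles non-concentric configurations this heuristic cannot.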

  19. A new method for getting the three-dimensional curve of the groove of a spectacle frame by optical measuring

    NASA Astrophysics Data System (ADS)

    Rückwardt, M.; Göpfert, A.; Schnellhorn, M.; Correns, M.; Rosenberger, M.; Linß, G.

    2010-07-01

    Precise measuring of spectacle frames is an important field of quality assurance for opticians and their customers. Different suppliers and a number of measuring methods are available, but all of them are tactile. In this paper the possible employment of optical coordinate measuring machines for detecting the groove of a spectacle frame is discussed. The ambient conditions, such as deviation and measuring time, are as multifaceted as the quality characteristics and the measurement objects themselves, and have to be tested. The main challenge for an optical coordinate measuring machine, however, is the blocked optical path, because the device under test is located behind an undercut. In this case it is necessary to deflect the beam of the machine, for example with a rotating plane mirror. In the next step the difficulties of applying machine vision to the spectacle frame are explained. Finally, first results are given.

  20. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-based piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, in itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  1. Sensory Aids for the Blind.

    ERIC Educational Resources Information Center

    National Academy of Sciences - National Research Council, Washington, DC. Committee on Prosthetics Research and Development.

    The problems of providing sensory aids for the blind are presented and a report on the present status of aids discusses direct translation and recognition reading machines as well as mobility aids. Aspects of required research considered are the following: assessment of needs; vision, audition, taction, and multimodal communication; reading aids,…

  2. Geometric Invariants and Object Recognition.

    DTIC Science & Technology

    1992-08-01

    ... University of Chicago Press. Maybank, S.J. [1992], "The Projection of Two Non-coplanar Conics", in Geometric Invariance in Machine Vision, eds. J.L. Mundy and A. Zisserman, MIT Press, Cambridge, MA. Mundy, J.L., Kapur, ..., Maybank, S.J., and Quan, L. [1992a], "Geometric Interpretation of ...

  3. The Need for Alternative Paradigms in Science and Engineering Education

    ERIC Educational Resources Information Center

    Baggi, Dennis L.

    2007-01-01

    There are two main claims in this article. First, that the classic pillars of engineering education, namely, traditional mathematics and differential equations, are merely a particular, if not old-fashioned, representation of a broader mathematical vision, which spans from Turing machine programming and symbolic productions sets to sub-symbolic…

  4. Artificial Intelligence and the High School Computer Curriculum.

    ERIC Educational Resources Information Center

    Dillon, Richard W.

    1993-01-01

    Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…

  5. Theater-Level Stochastic Air-to-Air Engagement Modeling via Event Occurrence Networks Using Piecewise Polynomial Approximation

    DTIC Science & Technology

    2001-09-01

    ... diagnosis, natural language understanding, circuit fault diagnosis, pattern recognition, machine vision, financial auditing, map learning, sensor ... A flight's degree of command and control (FCC) value is assumed to be the average of all the ACC values of the aircraft in the ...

  6. ICPR-2016 - International Conference on Pattern Recognition

    Science.gov Websites

    Speakers: "Learning for Scene Understanding" ... ICPR 2016 paper awards: Best Piero Zamperoni Student Paper, "Self-Paced Dictionary Learning for Cross-Domain Retrieval and Recognition" (Xu, Dan; Song, Jingkuan; Alameda ...). Discussions on recent advances in the fields of Pattern Recognition, Machine Learning and Computer Vision.

  7. Edge detection

    NASA Astrophysics Data System (ADS)

    Hildreth, E. C.

    1985-09-01

    For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step: vision proceeds in stages, with each stage producing increasingly useful descriptions of the image and then the scene. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
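
    A minimal illustration of detecting intensity changes follows, using thresholded finite-difference gradient magnitude rather than the zero-crossing operators the article's theory develops; the step-edge image is synthetic.

```python
import numpy as np

def gradient_edges(img, thresh):
    """Mark pixels where the central-difference gradient magnitude
    of the intensity image exceeds a threshold."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # horizontal differences
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # vertical differences
    mag = np.hypot(gx, gy)
    return mag > thresh

# A step edge: dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
edges = gradient_edges(img, thresh=10.0)
```

    Real operators first smooth the image (e.g. with a Gaussian) so that the differencing does not amplify noise; that smoothing scale is central to the theory the article reviews.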

  8. In Situ Fabrication Technologies: Meeting the Challenge for Exploration

    NASA Technical Reports Server (NTRS)

    Howard, Richard W.

    2005-01-01

    A viewgraph presentation on Lunar and Martian in situ fabrication technologies meeting the challenges for exploration is shown. The topics include: 1) Exploration Vision; 2) Vision Requirements Early in the Program; 3) Vision Requirements Today; 4) Why is ISFR Technology Needed? 5) ISFR and In Situ Resource Utilization (ISRU); 6) Fabrication Feedstock Considerations; 7) Planetary Resource Primer; 8) Average Chemical Element Abundances in Lunar Soil; 9) Chemical Elements in Aerospace Engineering Materials; 10) Schematic of Raw Regolith Processing into Constituent Components; 11) Iron, Aluminum, and Basalt Processing from Separated Elements and Compounds; 12) Space Power Systems; 13) Power Source Applicability; 14) Fabrication Systems Technologies; 15) Repair and Nondestructive Evaluation (NDE); and 16) Habitat Structures. A development overview of Lunar and Martian repair and nondestructive evaluation is also presented.

  9. Mississippi Curriculum Framework for Machine Tool Operation/Machine Shop and Tool and Die Making Technology Cluster (Program CIP: 48.0507--Tool and Die Maker/Technologist) (Program CIP: 48.0503--Machine Shop Assistant). Postsecondary Programs.

    ERIC Educational Resources Information Center

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for the course sequences in the machine tool operation/machine tool and tool and die making technology programs cluster. Presented in the introductory section are a framework of courses and programs, description of the…

  10. Machine Tool Advanced Skills Technology (MAST). Common Ground: Toward a Standards-Based Training System for the U.S. Machine Tool and Metal Related Industries. Volume 4: Manufacturing Engineering Technology, of a 15-Volume Set of Skill Standards and Curriculum Training Materials for the Precision Manufacturing Industry.

    ERIC Educational Resources Information Center

    Texas State Technical Coll., Waco.

    This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…

  11. Machine Tool Advanced Skills Technology (MAST). Common Ground: Toward a Standards-Based Training System for the U.S. Machine Tool and Metal Related Industries. Volume 7: Industrial Maintenance Technology, of a 15-Volume Set of Skill Standards and Curriculum Training Materials for the Precision Manufacturing Industry.

    ERIC Educational Resources Information Center

    Texas State Technical Coll., Waco.

    This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…

  12. The 13th Annual Intelligent Ground Vehicle Competition: intelligent ground vehicles created by intelligent teams

    NASA Astrophysics Data System (ADS)

    Theisen, Bernard L.

    2005-10-01

    The Intelligent Ground Vehicle Competition (IGVC) is one of three unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI) in the 1990s. The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 13 years, the competition has challenged undergraduate, graduate and Ph.D. students with real world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 50 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the three-day competition are highlighted. Finally, an assessment of the competition based on participant feedback is presented.

  13. Robotic general surgery: current practice, evidence, and perspective.

    PubMed

    Jung, M; Morel, P; Buehler, L; Buchs, N C; Hagen, M E

    2015-04-01

    Robotic technology began to be adopted in the field of general surgery in the 1990s. Since then, the da Vinci surgical system (Intuitive Surgical Inc, Sunnyvale, CA, USA) has remained by far the most commonly used system in this domain. The da Vinci surgical system is a master-slave machine that offers three-dimensional vision, articulated instruments with seven degrees of freedom, and additional software features such as motion scaling and tremor filtration. The specific design allows hand-eye alignment with intuitive control of the minimally invasive instruments. As such, robotic surgery appears technologically superior to laparoscopy, overcoming some of the technical limitations that the conventional approach imposes on the surgeon. This article reviews the current literature and the perspective of robotic general surgery. While robotics has been applied to a wide range of general surgery procedures, its precise role in this field remains a subject of further research. To date, only limited clinical evidence has been generated that could establish robotics as the gold standard for general surgery procedures. While surgical robotics is still in its infancy, with multiple novel systems currently under development and clinical trials in progress, the opportunities for this technology appear endless, and robotics should have a lasting impact on the field of general surgery.

  14. Air and Water System (AWS) Design and Technology Selection for the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Kliss, Mark

    2005-01-01

    This paper considers technology selection for the crew air and water recycling systems to be used in long duration human space exploration. The specific objectives are to identify the most probable air and water technologies for the vision for space exploration and to identify the alternate technologies that might be developed. The approach is to conduct a preliminary, first-cut systems engineering analysis, beginning with the Air and Water System (AWS) requirements and the system mass balance, and then define the functional architecture, review the International Space Station (ISS) technologies, and discuss alternate technologies. The life support requirements for air and water are well known. The results of the mass flow and mass balance analysis help define the system architectural concept. The AWS includes five subsystems: Oxygen Supply, Condensate Purification, Urine Purification, Hygiene Water Purification, and Clothes Wash Purification. AWS technologies have been evaluated in the life support design for ISS Node 3, in earlier space station design studies, in proposals for the upgrade or evolution of the space station, and in studies of potential lunar or Mars missions. The leading candidate technologies for the vision for space exploration are those planned for Node 3 of the ISS. The ISS life support was designed to utilize Space Station Freedom (SSF) hardware to the maximum extent possible. The SSF final technology selection process, criteria, and results are discussed. Would it be cost-effective for the vision for space exploration to develop alternate technology? This paper will examine this and other questions associated with AWS design and technology selection.
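    The trade between recycling and resupply described above can be illustrated with a toy steady-state mass balance. This is a minimal sketch only; the per-person consumption rates and recovery efficiencies below are assumed round numbers for demonstration, not values from the paper.

    ```python
    # Toy steady-state mass balance for a crew air/water loop.
    # Assumed illustrative rates (kg per crew-member per day), not paper values.
    O2_USE = 0.84     # oxygen consumed
    WATER_USE = 3.5   # water used (drinking + hygiene, simplified)

    def daily_resupply(crew, o2_recovery, water_recovery):
        """Make-up mass (kg/day) the vehicle must supply after recycling losses."""
        o2_makeup = crew * O2_USE * (1.0 - o2_recovery)
        water_makeup = crew * WATER_USE * (1.0 - water_recovery)
        return o2_makeup + water_makeup

    # Open loop (no recycling): the crew's full daily usage must be launched.
    open_loop = daily_resupply(4, 0.0, 0.0)
    # Mostly closed loop (90% recovery): only the 10% losses are resupplied.
    closed_loop = daily_resupply(4, 0.9, 0.9)
    ```

    Even this crude model shows why high recovery efficiency dominates long-duration mission mass: resupply scales with (1 - recovery), so going from open loop to 90% recovery cuts the daily make-up mass by a factor of ten.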

  15. Enhanced Vision for All-Weather Operations Under NextGen

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Williams, Steven P.

    2010-01-01

    Recent research in Synthetic/Enhanced Vision technology is analyzed with respect to existing Category II/III performance and certification guidance. The goal is to start the development of performance-based vision systems technology requirements to support future all-weather operations and the NextGen goal of Equivalent Visual Operations. This work shows that existing criteria to operate in Category III weather and visibility are not directly applicable since, unlike today, the primary reference for maneuvering the airplane is based on what the pilot sees visually through the "vision system." New criteria are consequently needed. Several possible criteria are discussed, but more importantly, the factors associated with landing system performance using automatic and manual landings are delineated.

  16. An Introduction to Intelligent Processing Programs Developed by the Air Force Manufacturing Technology Directorate

    NASA Technical Reports Server (NTRS)

    Sampson, Paul G.; Sny, Linda C.

    1992-01-01

    The Air Force has numerous on-going manufacturing and integration development programs (machine tools, composites, metals, assembly, and electronics) which are instrumental in improving productivity in the aerospace industry, but more importantly, have identified strategies and technologies required for the integration of advanced processing equipment. An introduction to four current Air Force Manufacturing Technology Directorate (ManTech) manufacturing areas is provided. Research is being carried out in the following areas: (1) machining initiatives for aerospace subcontractors which provide for advanced technology and innovative manufacturing strategies to increase the capabilities of small shops; (2) innovative approaches to advance machine tool products and manufacturing processes; (3) innovative approaches to advance sensors for process control in machine tools; and (4) efforts currently underway to develop, with the support of industry, the Next Generation Workstation/Machine Controller (Low-End Controller Task).

  17. Cultures in orbit: Satellite technologies, global media and local practice

    NASA Astrophysics Data System (ADS)

    Parks, Lisa Ann

    Since the launch of Sputnik in 1957, satellite technologies have had a profound impact upon cultures around the world. "Cultures in Orbit" examines these seemingly disembodied, distant relay machines in relation to situated social and cultural processes on earth. Drawing upon a range of materials including NASA and UNESCO documents, international satellite television broadcasts, satellite 'development' projects, documentary and science fiction films, remote sensing images, broadcast news footage, World Wide Web sites, and popular press articles, I delineate and analyze a series of satellite mediascapes. "Cultures in Orbit" analyzes uses of satellites for live television relay, surveillance, archaeology and astronomy. The project examines such satellite media as the first live global satellite television program Our World, Elvis' Aloha from Hawaii concert, Aboriginal Australian satellite programs, and Star TV's Asian music videos. In addition, the project explores reconnaissance images of mass graves in Bosnia, archaeological satellite maps of Cleopatra's underwater palace in Egypt, and Hubble Space Telescope images. These case studies are linked by a theoretical discussion of the satellite's involvement in shifting definitions of time, space, vision, knowledge and history. The satellite fosters an aesthetic of global realism predicated on instantaneous transnational connections. It reorders linear chronologies by revealing traces of the ancient past on the earth's surface and by searching in deep space for the "edge of time." On earth, the satellite is used to modernize and develop "primitive" societies. Satellites have produced new electronic spaces of international exchange, but they also generate strategic maps that advance Western political and cultural hegemony. By technologizing human vision, the satellite also extends the epistemologies of the visible, the historical and the real. It allows us to see artifacts and activities on earth from new vantage points; it allows us to read the surface of the earth as a text; and it enables us to see beyond the limits of human civilization and into the alien domain of deep space.

  18. Understanding of how older adults with low vision obtain, process, and understand health information and services.

    PubMed

    Kim, Hyung Nam

    2017-10-16

    Twenty-five years after the Americans with Disabilities Act, there has still been a lack of advancement of accessibility in healthcare for people with visual impairments, particularly older adults with low vision. This study aims to advance understanding of how older adults with low vision obtain, process, and use health information and services, and to seek opportunities for information technology to support them. A convenience sample of 10 older adults with low vision participated in semi-structured phone interviews, which were audio-recorded and transcribed verbatim for analysis. Participants shared various concerns about accessing, understanding, and using health information, care services, and multimedia technologies. Two main themes and nine subthemes emerged from the analysis. Because of these concerns, older adults with low vision tended not to obtain the full range of health information and services needed to meet their specific needs. Those with low vision still rely on residual vision, so multimedia-based information can be useful, but it should be designed to ensure its accessibility, usability, and understandability.

  19. Computer vision for automatic inspection of agricultural produce

    NASA Astrophysics Data System (ADS)

    Molto, Enrique; Blasco, Jose; Benlloch, Jose V.

    1999-01-01

    Fruit and vegetables undergo various manipulations from the field to the final consumer, basically oriented towards cleaning the product and sorting it into homogeneous categories. For this reason, several research projects aimed at fast, reliable produce sorting and quality control are currently under development around the world. Moreover, manual and semi-automatic commercial systems capable of reasonably performing these tasks are available. However, in many cases their accuracy is incompatible with current European market demands, which are constantly increasing. IVIA, the Valencian Research Institute of Agriculture, located in Spain, has been involved in several European projects related to machine vision for real-time inspection of various agricultural products. This paper focuses on work related to two products that have different requirements: fruit and olives. In the case of fruit, the Institute has developed a vision system capable of providing an assessment of the external quality of a single fruit to a robot that also receives information from other sensors. The system uses four different views of each fruit and has been tested on peaches, apples and citrus. Processing time for each image is under 500 ms using a conventional PC. The system provides information about primary and secondary color, blemishes and their extent, and stem presence and position, which allows subsequent automatic orientation of the fruit in the final box using a robotic manipulator. Work carried out on olives was devoted to fast sorting of table olives. A prototype has been developed to demonstrate the feasibility of a machine vision system capable of automatically sorting 2500 kg/h of olives using low-cost conventional hardware.
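    The kind of multi-view color screening described above can be sketched in a few lines. This is not the IVIA system: the channel-ratio threshold, the blemish tolerance, and the synthetic test image are all assumed values chosen purely for illustration of the general technique (classify pixels by color, then accept or reject the fruit from its worst view).

    ```python
    # Minimal sketch of color-based blemish screening across several views.
    # Thresholds are assumed, illustrative values, not the IVIA system's.
    import numpy as np

    def blemish_fraction(rgb, ratio_threshold=1.2):
        """Fraction of pixels whose red/green ratio falls below the threshold
        (a crude stand-in for 'not the fruit's primary color')."""
        r = rgb[..., 0].astype(float)
        g = rgb[..., 1].astype(float) + 1e-6  # avoid division by zero
        return float(np.mean(r / g < ratio_threshold))

    def accept(views, tolerance=0.05):
        """Accept the fruit only if every view is within the blemish tolerance."""
        return all(blemish_fraction(v) <= tolerance for v in views)

    # Synthetic four-view test: a uniformly orange-ish (high red/green) fruit.
    clean = [np.full((8, 8, 3), [200, 120, 40], dtype=np.uint8)] * 4
    ```

    A real system would add segmentation of the fruit from the background, calibration of the color channels, and connected-component analysis to measure blemish extent, but the accept/reject structure over multiple views stays the same.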

  20. Rapid Prototyping in Technology Education.

    ERIC Educational Resources Information Center

    Flowers, Jim; Moniz, Matt

    2002-01-01

    Describes how technology education majors are using a high-tech model builder, called a fused deposition modeling machine, to develop their models directly from computer-based designs without any machining. Gives examples of applications in technology education. (JOW)
