Sample records for automated computer vision

  1. Computer vision in the poultry industry

    USDA-ARS's Scientific Manuscript database

    Computer vision is becoming increasingly important in the poultry industry due to the increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist with. Researc...

  2. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Treesearch

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  3. InPRO: Automated Indoor Construction Progress Monitoring Using Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Hamledari, Hesam

    In this research, "InPRO", an envisioned intelligent robotic solution that employs a series of unmanned aerial vehicles (UAVs) for automated indoor data collection and inspection, is presented. InPRO consists of four stages: 1) automated path planning; 2) autonomous UAV-based indoor inspection; 3) automated computer vision-based assessment of progress; and 4) automated updating of 4D building information models (BIM). The work presented in this thesis addresses the third stage of InPRO. A series of computer vision-based methods that automate the assessment of construction progress using images captured at indoor sites are introduced. The proposed methods employ computer vision and machine learning techniques to detect the components of under-construction indoor partitions. In particular, framing (studs), insulation, electrical outlets, and different states of drywall sheets (installing, plastering, and painting) are automatically detected using digital images. High accuracy rates, real-time performance, and operation without a priori information are indicators of the methods' promising performance.

  4. Operational Assessment of Color Vision

    DTIC Science & Technology

    2016-06-20

    ...evaluated in this study. Subject terms: color vision, aviation, cone contrast test, Colour Assessment & Diagnosis, ColorDx, OBVA. ...symbologies are frequently used to aid or direct critical activities such as aircraft landing approaches or railroad right-of-way designations... computer-generated display systems have facilitated the development of computer-based, automated tests of color vision [14,15]. The United Kingdom's...

  5. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-01-01

    Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  6. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    NASA Astrophysics Data System (ADS)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-02-01

    Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  7. Computer vision for microscopy diagnosis of malaria.

    PubMed

    Tek, F Boray; Dempster, Andrew G; Kale, Izzet

    2009-07-13

    This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.
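
    The general framework described above (image acquisition, pre-processing, segmentation, pattern classification) maps onto a short pipeline. A minimal sketch, assuming OpenCV and a pre-trained scikit-learn-style cell classifier; the classifier and all thresholds are illustrative, not the authors' implementation:

      import cv2
      import numpy as np

      def diagnose_smear(image_path, classifier):
          """Acquisition -> pre-processing -> segmentation -> classification."""
          img = cv2.imread(image_path)                         # image acquisition
          gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
          gray = cv2.GaussianBlur(gray, (5, 5), 0)             # pre-processing
          # Segmentation: stained cells appear dark on a bright field
          _, mask = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          labels = []
          for c in contours:                                   # per-cell classification
              x, y, w, h = cv2.boundingRect(c)
              if w * h < 100:                                  # drop debris (illustrative)
                  continue
              patch = cv2.resize(gray[y:y + h, x:x + w], (32, 32)).flatten()
              labels.append(classifier.predict([patch])[0])    # e.g. infected / healthy
          return labels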

  8. Computational Analysis of Behavior.

    PubMed

    Egnor, S E Roian; Branson, Kristin

    2016-07-08

    In this review, we discuss the emerging field of computational behavioral analysis: the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important to both obtaining biologically relevant behavioral data and enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.

  9. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach.

  10. Machine learning and computer vision approaches for phenotypic profiling

    PubMed Central

    Morris, Quaid

    2017-01-01

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. PMID:27940887

  11. Task-focused modeling in automated agriculture

    NASA Astrophysics Data System (ADS)

    Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack

    1993-01-01

    Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.

  12. Vision 20/20: Automation and advanced computing in clinical radiation oncology.

    PubMed

    Moore, Kevin L; Kagadis, George C; McNutt, Todd R; Moiseenko, Vitali; Mutic, Sasa

    2014-01-01

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  13. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of flexible manufacturing systems (FMS), work cells, and work stations, and their control hierarchy, are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  14. Automated Grading of Rough Hardwood Lumber

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Any automatic hardwood grading system must have two components. The first of these is a computer vision system for locating and identifying defects on rough lumber. The second is a system for automatically grading boards based on the output of the computer vision system. This paper presents research results aimed at developing the first of these components. The...

  15. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  16. A computer vision for animal ecology.

    PubMed

    Weinstein, Ben G

    2018-05-01

    A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture, to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis.

  17. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Kevin L., E-mail: kevinmoore@ucsd.edu; Moiseenko, Vitali; Kagadis, George C.

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  18. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Kevin L., E-mail: kevinmoore@ucsd.edu; Moiseenko, Vitali; Kagadis, George C.

    2014-01-15

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  19. Marking parts to aid robot vision

    NASA Technical Reports Server (NTRS)

    Bales, J. W.; Barker, L. K.

    1981-01-01

    The premarking of parts for subsequent identification by a robot vision system appears to be beneficial as an aid in the automation of certain tasks such as construction in space. A simple, color coded marking system is presented which allows a computer vision system to locate an object, calculate its orientation, and determine its identity. Such a system has the potential to operate accurately, and because the computer shape analysis problem has been simplified, it has the ability to operate in real time.
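
    A minimal sketch of how such a mark could be located and its orientation computed with image moments, assuming OpenCV; the HSV colour range is an illustrative placeholder, not the paper's actual coding scheme:

      import cv2
      import numpy as np

      def locate_marker(bgr_image):
          """Return the centroid and principal-axis angle of one colour mark."""
          hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))  # placeholder: blue mark
          m = cv2.moments(mask, binaryImage=True)
          if m["m00"] == 0:
              return None                                      # mark not visible
          cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]    # centroid (location)
          # Orientation of the mark from its central second moments
          theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
          return (cx, cy), theta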

  20. Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study

    ERIC Educational Resources Information Center

    Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.

    2009-01-01

    Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two 6-month-old infant-mother dyads who each engaged in a face-to-face…

  21. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  22. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  23. An automated miniaturized Haploscope for testing binocular visual function

    NASA Technical Reports Server (NTRS)

    Decker, T. A.; Williams, R. E.; Kuether, C. L.; Wyman-Cornsweet, D.

    1976-01-01

    A computer-controlled binocular vision testing device has been developed as one part of a system designed for NASA to test the vision of astronauts during spaceflight. The device, called the Mark III Haploscope, utilizes semi-automated psychophysical test procedures to measure visual acuity, stereopsis, phorias, fixation disparity and accommodation/convergence relationships. All tests are self-administered, yield quantitative data and may be used repeatedly without subject memorization. Future applications of this programmable, compact device include its use as a clinical instrument to perform routine eye examinations or vision screening, and as a research tool to examine the effects of environment or work-cycle upon visual function.

  24. Computer vision-based sorting of Atlantic salmon (Salmo salar) fillets according to their color level.

    PubMed

    Misimi, E; Mathiassen, J R; Erikson, U

    2007-01-01

    A computer vision method was used to evaluate the color of Atlantic salmon (Salmo salar) fillets. Computer vision-based sorting of fillets according to their color was studied on 2 separate groups of salmon fillets. The images of fillets were captured using a high-resolution digital camera. Images of salmon fillets were then segmented into regions of interest and analyzed in red, green, and blue (RGB) and CIE lightness, redness, and yellowness (Lab) color spaces, and classified according to the Roche color card industrial standard. Comparisons between the visual evaluations made by a panel of human inspectors according to the Roche SalmoFan lineal standard and the color scores generated by the computer vision algorithm showed no significant differences between the methods. Overall, computer vision can be used as a powerful tool to sort fillets by color in a fast and nondestructive manner. The low cost of implementing computer vision solutions creates the potential to replace manual labor in fish processing plants with automation.
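
    A minimal sketch of the colour-scoring step, assuming OpenCV and a pre-computed fillet mask; the SalmoFan reference values below are placeholders that would need calibration against the physical colour card:

      import cv2
      import numpy as np

      # Placeholder mean (L, a, b) references per SalmoFan score; real values
      # would be calibrated against the physical Roche colour card.
      SALMOFAN = {24: (55, 35, 30), 27: (52, 42, 36), 30: (50, 48, 42)}

      def score_fillet(bgr_image, fillet_mask):
          """Mean fillet colour in CIE Lab, then nearest SalmoFan score."""
          lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
          mean = np.array(cv2.mean(lab, mask=fillet_mask)[:3])   # mean (L, a, b) over fillet
          return min(SALMOFAN,
                     key=lambda s: np.linalg.norm(np.array(SALMOFAN[s]) - mean))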

  25. Computer-controlled attenuator.

    PubMed

    Mitov, D; Grozev, Z

    1991-01-01

    Various possibilities for applying electronic computer-controlled attenuators to the automation of physiological experiments are considered. A detailed description is given of the design of a 4-channel computer-controlled attenuator in which the output signal changes in linear steps in two of the channels and in logarithmic steps in the other two. This, together with additional programmable timers, makes it possible to automate a wide range of studies in different spheres of physiology and psychophysics, including vision and hearing.

  26. Automated detection and classification of dice

    NASA Astrophysics Data System (ADS)

    Correia, Bento A. B.; Silva, Jeronimo A.; Carvalho, Fernando D.; Guilherme, Rui; Rodrigues, Fernando C.; de Silva Ferreira, Antonio M.

    1995-03-01

    This paper describes a typical machine vision system in an unusual application: the automated visual inspection of a casino's playing tables. The SORTE computer vision system was developed at INETI under a contract with the Portuguese Gaming Inspection Authorities (IGJ). It aims to automate the detection and classification of dice scores on the playing tables of the game `Banca Francesa' (French Banking) in casinos. The system is based on on-line analysis of the images captured by a monochrome CCD camera placed over the playing tables, in order to extract relevant information concerning the score indicated by the dice. Image processing algorithms for real-time automatic throw detection and dice classification were developed and implemented.
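
    The pip-counting step of such a system can be sketched as round-blob detection inside each die's image region; the size and circularity thresholds are illustrative, assuming OpenCV:

      import cv2
      import math

      def read_die(gray_die_roi):
          """Count round, dark pips inside one die's image region (the score)."""
          # Pips are dark dots on a lighter die face
          _, mask = cv2.threshold(gray_die_roi, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          pips = 0
          for c in contours:
              area = cv2.contourArea(c)
              perimeter = cv2.arcLength(c, True)
              if perimeter == 0:
                  continue
              circularity = 4 * math.pi * area / perimeter ** 2
              if 20 < area < 500 and circularity > 0.7:    # pip-sized, round blobs
                  pips += 1
          return pips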

  27. Sensor Control of Robot Arc Welding

    NASA Technical Reports Server (NTRS)

    Sias, F. R., Jr.

    1983-01-01

    The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.

  28. Optomechatronic System For Automated Intra Cytoplasmic Sperm Injection

    NASA Astrophysics Data System (ADS)

    Shulev, Assen; Tiankov, Tihomir; Ignatova, Detelina; Kostadinov, Kostadin; Roussev, Ilia; Trifonov, Dimitar; Penchev, Valentin

    2015-12-01

    This paper presents a complex optomechatronic system for In-Vitro Fertilization (IVF), offering almost complete automation of the Intra Cytoplasmic Sperm Injection (ICSI) procedure. The compound parts and sub-systems, as well as some of the computer vision algorithms, are described below. System capabilities for ICSI have been demonstrated on infertile oocyte cells.

  29. Future Automated Rough Mills Hinge on Vision Systems

    Treesearch

    Philip A. Araman

    1996-01-01

    The backbone behind major changes to present and future rough mills in dimension, furniture, cabinet or millwork facilities will be computer vision systems. Because of the wide variety of products and the quality of parts produced, the scanning systems and rough mills will vary greatly. The scanners will vary in type. For many complicated applications, multiple scanner...

  30. Object extraction in photogrammetric computer vision

    NASA Astrophysics Data System (ADS)

    Mayer, Helmut

    This paper discusses the state and promising directions of automated object extraction in photogrammetric computer vision, considering also practical aspects arising for digital photogrammetric workstations (DPW). A review of the state of the art shows that there are only a few practically successful systems on the market. Therefore, important issues for the practical success of automated object extraction are identified. A sound and, most importantly, powerful theoretical background is the basis; here, we particularly point to statistical modeling. Testing makes clear which of the approaches are suited best and how useful they are in practice. A key to the commercial success of a practical system is efficient user interaction. As the means for data acquisition are changing, new promising application areas, such as extremely detailed three-dimensional (3D) urban models for virtual television or mission rehearsal, evolve.

  31. Automated Analysis of Composition and Style of Photographs and Paintings

    ERIC Educational Resources Information Center

    Yao, Lei

    2013-01-01

    Computational aesthetics is a newly emerging cross-disciplinary field with its core situated in traditional research areas such as image processing and computer vision. Using a computer to interpret aesthetic terms for images is very challenging. In this dissertation, I focus on solving specific problems about analyzing the composition and style…

  32. Automated hardwood lumber grading utilizing a multiple sensor machine vision technology

    Treesearch

    D. Earl Kline; Chris Surak; Philip A. Araman

    2003-01-01

    Over the last 10 years, scientists at the Thomas M. Brooks Forest Products Center, the Bradley Department of Electrical and Computer Engineering, and the USDA Forest Service have been working on lumber scanning systems that can accurately locate and identify defects in hardwood lumber. Current R&D efforts are targeted toward developing automated lumber grading...

  33. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision.

    PubMed

    Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf

    2018-06-18

    In forensic odontology, the comparison between antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate identification of unknown people by comparison between antemortem and postmortem PRs using computer vision. The study includes 43,467 PRs from 24,545 patients (46% female/54% male). All PRs were filtered and evaluated with Matlab R2014b, including the Image Processing and Computer Vision System toolboxes. The matching process used SURF features to find the corresponding points between two PRs (unknown person and database entry) out of the whole database. Of 40 randomly selected persons, 34 (85%) could be reliably identified by corresponding PR matching points between an already existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 points for a successful identification between two different PRs of the same person and a maximum of 12 corresponding matching points for other, non-identical persons in the database; hence, 12 matching points is the threshold for reliable assignment. Operating with an automatic PR system and computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by virtue of its fast and reliable identification of persons by PR, is suitable even if dental characteristics were removed or added in the past, and seems to be robust for large amounts of data.
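
    The identification step reduces to counting feature correspondences against the reported 12-point threshold. A sketch in Python with OpenCV, using ORB features in place of the SURF features the study ran in Matlab (so the match counts, though not the logic, would differ):

      import cv2

      MATCH_THRESHOLD = 12  # reported cut-off: >12 corresponding points => same person

      def count_matches(pr_a, pr_b):
          """Count strong feature correspondences between two grayscale PRs."""
          orb = cv2.ORB_create(nfeatures=2000)    # stand-in for the study's SURF
          _, d1 = orb.detectAndCompute(pr_a, None)
          _, d2 = orb.detectAndCompute(pr_b, None)
          if d1 is None or d2 is None:
              return 0
          pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(d1, d2, k=2)
          return sum(1 for p in pairs
                     if len(p) == 2 and p[0].distance < 0.7 * p[1].distance)  # ratio test

      def identify(pr_unknown, database):
          """database: dict person_id -> antemortem PR image; None if no match."""
          scores = {pid: count_matches(pr_unknown, img) for pid, img in database.items()}
          best = max(scores, key=scores.get)
          return best if scores[best] > MATCH_THRESHOLD else None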

  34. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    PubMed Central

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-01-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95–98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management. PMID:28338047
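
    The detection logic described (discriminating seals by the temperature, size, and shape of thermal signatures) can be sketched with connected-component analysis; the cut-off values here are illustrative, not the paper's calibrated ones:

      import cv2
      import numpy as np

      def count_seals(thermal_frame_c, warm_c=15.0, min_px=40, max_px=400):
          """Count warm, seal-sized blobs in a calibrated thermal frame (deg C).
          Temperature and size cut-offs are illustrative, not the paper's values."""
          mask = (thermal_frame_c > warm_c).astype(np.uint8)    # warmer than background
          n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
          seals = 0
          for i in range(1, n):                                 # label 0 is background
              area = stats[i, cv2.CC_STAT_AREA]
              w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
              elongation = max(w, h) / max(1, min(w, h))        # seals read as elongated blobs
              if min_px <= area <= max_px and elongation < 6:
                  seals += 1
          return seals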

  35. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    NASA Astrophysics Data System (ADS)

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-03-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95-98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management.

  36. Automated Ecological Assessment of Physical Activity: Advancing Direct Observation.

    PubMed

    Carlson, Jordan A; Liu, Bo; Sallis, James F; Kerr, Jacqueline; Hipp, J Aaron; Staggs, Vincent S; Papa, Amy; Dean, Kelsey; Vasconcelos, Nuno M

    2017-12-01

    Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds per camera (three cameras in total) of video capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET)-minutes. Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC) assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82-0.98). Total MET-minutes were slightly underestimated by 9.3-17.1% and the ICCs were good (0.68-0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings.
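
    As a simple illustration of the group MET-minutes metric, a sketch with illustrative MET assignments per intensity class; the study's exact values are not given in this abstract:

      # Illustrative MET assignments per intensity class; the study's exact
      # values are not given in this abstract.
      MET = {"sedentary": 1.3, "light": 2.0, "moderate": 4.0, "vigorous": 8.0}

      def group_met_minutes(per_second_counts):
          """per_second_counts: one dict per second of video, mapping an
          intensity class to the number of people observed in that class."""
          return sum(MET[cls] * n
                     for counts in per_second_counts
                     for cls, n in counts.items()) / 60.0

      # Example: 120 s of three people sitting and one jogging
      # group_met_minutes([{"sedentary": 3, "vigorous": 1}] * 120)  # -> 23.8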

  37. Automated Ecological Assessment of Physical Activity: Advancing Direct Observation

    PubMed Central

    Carlson, Jordan A.; Liu, Bo; Sallis, James F.; Kerr, Jacqueline; Papa, Amy; Dean, Kelsey; Vasconcelos, Nuno M.

    2017-01-01

    Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds per camera (three cameras in total) of video capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET)-minutes. Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC) assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82–0.98). Total MET-minutes were slightly underestimated by 9.3–17.1% and the ICCs were good (0.68–0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings. PMID:29194358

  38. Economics of cutting hardwood dimension parts with an automated system

    Treesearch

    Henry A. Huber; Steve Ruddell; Kalinath Mukherjee; Charles W. McMillin

    1989-01-01

    A financial analysis using discounted cash-flow decision methods was completed to determine the economic feasibility of replacing a conventional roughmill crosscut and rip operation with a proposed automated computer vision and laser cutting system. Red oak and soft maple lumber were cut at production levels of 30 thousand board feet (MBF)/day and 5 MBF/day to produce...

  39. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  40. Development of a Computer Vision Technology for the Forest Products Manufacturing Industry

    Treesearch

    D. Earl Kline; Richard Conners; Philip A. Araman

    1992-01-01

    The goal of this research is to create an automated processing/grading system for hardwood lumber that will be of use to the forest products industry. The objective of creating a full scale machine vision prototype for inspecting hardwood lumber will become a reality in calendar year 1992. Space for the full scale prototype has been created at the Brooks Forest...

  41. Biological Basis For Computer Vision: Some Perspectives

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prostheses for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.

  42. A review of automated image understanding within 3D baggage computed tomography security screening.

    PubMed

    Mouton, Andre; Breckon, Toby P

    2015-01-01

    Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is, however, complicated by poor image resolution, image clutter, and high levels of noise and artefacts. We discuss the recent and most pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.

  43. Final Report on Video Log Data Mining Project

    DOT National Transportation Integrated Search

    2012-06-01

    This report describes the development of an automated computer vision system that identifies and inventories road signs from imagery acquired by the Kansas Department of Transportation's road profiling system, which takes images every 26.4 feet...

  44. Third Conference on Artificial Intelligence for Space Applications, part 2

    NASA Technical Reports Server (NTRS)

    Denton, Judith S. (Compiler); Freeman, Michael S. (Compiler); Vereen, Mary (Compiler)

    1988-01-01

    Topics relative to the application of artificial intelligence to space operations are discussed. New technologies for space station automation, design data capture, computer vision, neural nets, automatic programming, and real time applications are discussed.

  45. Development of Moire machine vision

    NASA Technical Reports Server (NTRS)

    Harding, Kevin G.

    1987-01-01

    Three dimensional perception is essential to the development of versatile robotics systems in order to handle complex manufacturing tasks in future factories and to provide the high accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and will demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements, and thereby the cost and time of production. This nondestructive evaluation capability is being developed to provide full-field range measurement and three dimensional scene analysis.

  46. Development of Moire machine vision

    NASA Astrophysics Data System (ADS)

    Harding, Kevin G.

    1987-10-01

    Three dimensional perception is essential to the development of versatile robotics systems in order to handle complex manufacturing tasks in future factories and to provide the high accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and will demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements, and thereby the cost and time of production. This nondestructive evaluation capability is being developed to provide full-field range measurement and three dimensional scene analysis.

  47. The software for automatic creation of the formal grammars used by speech recognition, computer vision, editable text conversion systems, and some new functions

    NASA Astrophysics Data System (ADS)

    Kardava, Irakli; Tadyszak, Krzysztof; Gulua, Nana; Jurga, Stefan

    2017-02-01

    To make environmental perception by artificial intelligence more flexible, supporting software modules are needed that can automate the creation of specific language syntax and perform further analysis for relevant decisions based on semantic functions. According to our proposed approach, pairs of formal rules for given sentences (in the case of natural languages) or statements (in the case of special languages) can be created with the help of computer vision, speech recognition, or editable-text conversion systems, for further automatic improvement. In other words, we have developed an approach that can significantly improve the automation of the training process of artificial intelligence, which as a result will give it a higher level of self-development, independent of users. Based on our approach, we have developed a demo version of the software, which includes the algorithm and software code implementing all of the above-mentioned components (computer vision, speech recognition, and editable-text conversion). The program can work in multi-stream mode and simultaneously create a syntax based on information received from several sources.

  48. Development of embedded real-time and high-speed vision platform

    NASA Astrophysics Data System (ADS)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, a personal computer (PC), whose large size is unsuitable for compact systems, is an indispensable component for human-computer interaction in traditional high-speed vision platforms. Therefore, this paper develops ER-HVP Vision, an embedded real-time and high-speed vision platform that is able to work completely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed for implementing image-parallel algorithms in the FPGA and image-sequential algorithms in the DSP. Hence, ER-HVP Vision, with a size of 320 mm x 250 mm x 87 mm, can be realized in a more compact form. Experimental results are also given to indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels is feasible under this newly developed vision platform.

  49. Gas flow parameters in laser cutting of wood: nozzle design

    Treesearch

    Kali Mukherjee; Tom Grendzwell; Parwaiz A.A. Khan; Charles McMillin

    1990-01-01

    The Automated Lumber Processing System (ALPS) is an ongoing team research effort to optimize the yield of parts in a furniture rough mill. The process is designed to couple aspects of computer vision, computer optimization of yield, and laser cutting. This research is focused on optimizing laser wood cutting. Laser machining of lumber has the advantage over...

  50. Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat

    PubMed Central

    Casanova, Joaquin J.; O'Shaughnessy, Susan A.; Evett, Steven R.; Rush, Charles M.

    2014-01-01

    Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications. PMID:25251410
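
    A sketch of the EM segmentation step, using scikit-learn's GaussianMixture (which is fitted by expectation maximization) on HSV pixels; picking the vegetation component by its proximity to green is our assumption for illustration, not necessarily the authors' exact rule:

      import cv2
      import numpy as np
      from sklearn.mixture import GaussianMixture

      def vegetation_hue(bgr_image):
          """Two-class EM segmentation (vegetation vs. soil) of HSV pixels,
          returning the mean vegetation hue in degrees."""
          hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float64)
          hsv[:, 0] *= 2.0                   # OpenCV stores hue as 0-179; scale to degrees
          sample = hsv[np.random.default_rng(0).choice(len(hsv), 20000)]  # subsample for speed
          gmm = GaussianMixture(n_components=2, random_state=0).fit(sample)
          labels = gmm.predict(hsv)
          veg = int(np.argmin(np.abs(gmm.means_[:, 0] - 120.0)))  # component nearer green
          return hsv[labels == veg, 0].mean()  # cf. ~118 unstressed vs ~111 stressed wheat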

  51. Computer vision challenges and technologies for agile manufacturing

    NASA Astrophysics Data System (ADS)

    Molley, Perry A.

    1996-02-01

    Sandia National Laboratories, a Department of Energy laboratory, is responsible for maintaining the safety, security, reliability, and availability of the nuclear weapons stockpile for the United States. Because of the changing national and global political climates and inevitable budget cuts, Sandia is changing the methods and processes it has traditionally used in the product realization cycle for weapon components. Because of the increasing age of the nuclear stockpile, it is certain that the reliability of these weapons will degrade with time unless eventual action is taken to repair, requalify, or renew them. Furthermore, due to the downsizing of the DOE weapons production sites and loss of technical personnel, the new product realization process is being focused on developing and deploying advanced automation technologies in order to maintain the capability for producing new components. The goal of Sandia's technology development program is to create a product realization environment that is cost effective and that offers improved quality and reduced cycle time for small lot sizes. The new environment will rely less on the expertise of humans and more on intelligent systems and automation to perform the production processes. The systems will be robust in order to provide maximum flexibility and responsiveness for rapidly changing component or product mixes. An integrated enterprise will allow ready access to and use of information for effective and efficient product and process design. Concurrent engineering methods will allow a speedup of the product realization cycle, reduce costs, and dramatically lessen the dependency on creating and testing physical prototypes. Virtual manufacturing will allow production processes to be designed, integrated, and programmed off-line before a piece of hardware ever moves. The overriding goal is to be able to build a large variety of new weapons parts on short notice. Many of the technologies being developed are also applicable to commercial production processes and applications. Computer vision will play a critical role in the new agile production environment, automating processes such as inspection, assembly, welding, material dispensing, and other process control tasks. Although many academic and commercial solutions have been developed, none has seen widespread adoption, considering the huge potential number of applications that could benefit from this technology. The reason for this slow adoption is that the advantages of computer vision for automation can be a double-edged sword: the benefits can be lost if the vision system requires an inordinate amount of time for reprogramming by a skilled operator to account for different parts, changes in lighting conditions, background clutter, changes in optics, etc. Commercially available solutions typically require an operator to manually program the vision system with the features used for recognition. In a recent survey, we asked a number of commercial manufacturers and machine vision companies the question, 'What prevents machine vision systems from being more useful in factories?' The number one (and unanimous) response was that vision systems require too much skill to set up and program to be cost effective.

  52. [Stimulation parameters for automatic examination of color vision].

    PubMed

    Sommerhalder, J; Pelizzone, M; Roth, A

    1997-05-01

    We have developed a polyvalent computer-controlled instrument that uses the "two-equation method" (Rayleigh and Moreland equations) to measure human colour vision. This "colorimeter" (or anomaloscope) was used to determine the influence of some important stimulation parameters: the effects of stimulus exposure time, observation field size, absolute stimulus luminance, and saturation and luminance mismatches between mixture and reference stimuli were measured on our computer-controlled colorimeter. All subjects were normal observers. 1) An exposure time of 2 s was found to be optimal for clinical work. 2) The Moreland equation on a 7 degree observation field yields results whose population variability is comparable to that of a Rayleigh equation on a 2 degree field. 3) A retinal illuminance between 40 and 1000 trolands can be used for automated Moreland matches. 4) The saturation of the reference field for the Moreland match can be preset to a fixed value. 5) It is important to automatically vary the radiance of the reference field to provide an approximate luminance match as the ratio of primaries in the mixture field is changed. These measurements allow us to determine optimal conditions for automated colour vision examinations and to make recommendations for an international standard.
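
    For reference, the two colour-match equations have the form below, written with commonly cited anomaloscope primaries; the instrument's exact primary wavelengths are not stated in this abstract, so treat them as illustrative:

      \text{Rayleigh:}\quad a\,R(670\,\mathrm{nm}) + (1-a)\,G(546\,\mathrm{nm}) \;\equiv\; \ell\,Y(589\,\mathrm{nm})
      \text{Moreland:}\quad a\,B(436\,\mathrm{nm}) + (1-a)\,G(490\,\mathrm{nm}) \;\equiv\; \ell\,C(480\,\mathrm{nm}) + d\,Y(589\,\mathrm{nm})

    Here a is the mixture ratio set by the subject and l is the reference radiance that the instrument varies automatically to hold an approximate luminance match, as noted in point 5 above.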

  13. Automated spot defect characterization in a field portable night vision goggle test set

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen; Ozten, Metehan; Aubry, Gilles; Arquetoux, Guillaume

    2018-05-01

    This paper discusses a new capability developed for, and results from, a field portable test set for Gen 2 and Gen 3 Image Intensifier (I2) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a knife-edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects of I² tubes with the same test set hardware previously presented. The original and still common Spot Defect Test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect; this is tedious and subjective. The prior semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain while electronically storing the image and the resulting table. The advanced Automated Spot Defect Test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report that is easily usable for verification. This is an inherently more repeatable process that ensures consistent spot detection independent of the operator. Results from across several NVGs are presented.
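
    As a rough illustration of the automated spot measurement described above, the following sketch applies connected-component analysis to a grayscale image of the tube output. OpenCV is assumed; the threshold, minimum area, and the mapping of defects to ring zones by radial distance are illustrative choices, not the test set's actual parameters.

    ```python
    # Minimal sketch of automated spot-defect measurement on an 8-bit
    # grayscale image of the I2 tube output; parameter values are illustrative.
    import cv2
    import numpy as np

    def measure_spot_defects(gray, dark_thresh=60, min_area=4):
        """Return (area_px, centroid, radius_from_center) for each dark spot."""
        # Dark blemishes appear as low-intensity blobs on the bright field.
        _, mask = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
        h, w = gray.shape
        center = np.array([w / 2.0, h / 2.0])
        defects = []
        for i in range(1, n):  # label 0 is the background
            area = stats[i, cv2.CC_STAT_AREA]
            if area < min_area:
                continue  # ignore single-pixel noise
            c = centroids[i]
            # Radial distance maps each defect onto a concentric ring zone.
            defects.append((int(area), (float(c[0]), float(c[1])),
                            float(np.linalg.norm(c - center))))
        return defects
    ```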

  14. Computer Vision Malaria Diagnostic Systems-Progress and Prospects.

    PubMed

    Pollak, Joseph Joel; Houri-Yafin, Arnon; Salpeter, Seth J

    2017-01-01

    Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.

  15. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  16. The "Biologically-Inspired Computing" Column

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike

    2007-01-01

    Self-managing systems, whether viewed from the perspective of Autonomic Computing or from that of another initiative, offer a holistic vision for the development and evolution of biologically-inspired computer-based systems. They aim to bring new levels of automation and dependability to systems, while simultaneously hiding their complexity and reducing costs. A case can certainly be made that all computer-based systems should exhibit autonomic properties [6], and we envisage greater interest in, and uptake of, autonomic principles in future system development.

  17. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at lower cost and with greater currency. Demands for mapping and GIS data are also increasing for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills to automate these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research on image understanding of aerial and satellite imagery.

  18. Apple flower detection using deep convolutional networks

    USDA-ARS?s Scientific Manuscript database

    In order to optimize fruit production, a portion of the flowers and fruitlets of apple trees must be removed early in the growing season. The proportion to be removed is determined by the bloom intensity, i.e., the number of flowers present in the orchard. Several automated computer vision systems...

  19. Integrated Imaging and Vision Techniques for Industrial Inspection: A Special Issue on Machine Vision and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep

    2010-06-05

    Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.

  20. Computer Vision Research and Its Applications to Automated Cartography

    DTIC Science & Technology

    1984-09-01

    reflecting from scene surfaces, and the film and digitization processes that result in the computer representation of the image. These models, when...alone. Specifically, interpretations that are in some sense "orthogonal" are preferred. A method for finding such interpretations for right-angle...saturated colors are not precisely representable and the colors recorded with different films or cameras may differ, but the tricomponent representation is t

  1. Computer vision-based automated peak picking applied to protein NMR spectra.

    PubMed

    Klukowski, Piotr; Walczak, Michal J; Gonczarek, Adam; Boudet, Julien; Wider, Gerhard

    2015-09-15

    A detailed analysis of multidimensional NMR spectra of macromolecules requires the identification of individual resonances (peaks). This task can be tedious and time-consuming and often requires support by experienced users. Automated peak picking algorithms were introduced more than 25 years ago, but major deficiencies remain that often prevent complete and error-free peak picking of biological macromolecule spectra. The major challenges for automated peak picking algorithms are the distinction of artifacts from real peaks, particularly those with irregular shapes, and the picking of peaks in spectral regions with overlapping resonances, which are very hard to resolve with existing computer algorithms. In both of these cases a visual inspection approach could be more effective than a 'blind' algorithm. We present a novel approach using computer vision (CV) methodology that could be better adapted to the problem of peak recognition. After suitable 'training', we successfully applied the CV algorithm to spectra of medium-sized soluble proteins up to molecular weights of 26 kDa and to a 130 kDa complex of a tetrameric membrane protein in detergent micelles. Our CV approach outperforms commonly used programs. With suitable training datasets, the application of the presented method can be extended to automated peak picking in multidimensional spectra of nucleic acids or carbohydrates and adapted to solid-state NMR spectra. CV-Peak Picker is available upon request from the authors. gsw@mol.biol.ethz.ch; michal.walczak@mol.biol.ethz.ch; adam.gonczarek@pwr.edu.pl Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
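
    For contrast with the computer vision approach, a conventional 'blind' peak picker can be sketched in a few lines as local maxima above a signal-to-noise threshold. This baseline (NumPy/SciPy assumed; parameter values illustrative, not the authors' CV-Peak Picker) is exactly the kind of algorithm that struggles with irregularly shaped artifacts and overlapped resonances.

    ```python
    # Baseline local-maximum peak picker for a 2-D spectrum; illustrative only.
    import numpy as np
    from scipy.ndimage import maximum_filter

    def pick_peaks(spectrum, noise_sigma, snr=5.0, size=3):
        """Return (row, col) indices of local maxima above an SNR threshold."""
        local_max = maximum_filter(spectrum, size=size) == spectrum
        strong = spectrum > snr * noise_sigma
        return np.argwhere(local_max & strong)
    ```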

  2. Trends and developments in industrial machine vision: 2013

    NASA Astrophysics Data System (ADS)

    Niel, Kurt; Heinzl, Christoph

    2014-03-01

    Following current advancements and implementations in the field of machine vision, there seem to be no borders for future developments: computing power constantly increases, new ideas are spreading, and previously challenging approaches are introduced into the mass market. Within the past decades these advances have had dramatic impacts on our lives. Consumer electronics, e.g. computers or telephones, which once occupied large volumes, now fit in the palm of a hand. To note just a few examples: face recognition was adopted by the consumer market, 3D capturing became cheap, and, thanks to a huge community, software development got easier through sophisticated development platforms. However, a gap remains between consumer and industrial applications: while the former have to be entertaining, the latter have to be reliable. Recent studies (e.g. VDMA [1], Germany) show a moderately increasing market for machine vision in industry. When industry is asked about its needs, the main challenges for industrial machine vision are simple usage, reliability for the process, quick support, full automation, self/easy adjustment to changing process parameters, "forget it in the line". A further big challenge is supporting quality control: nowadays the operator has to accurately define the tested features for checking the samples. There is an upcoming development to let automated machine vision applications find out essential parameters at a more abstract level (top down). In this work we focus on three current and future topics for industrial machine vision: metrology supporting automation; quality control (inline/atline/offline); and the visualization and analysis of datasets with steadily growing sizes. Finally, the general trend from pixel-oriented towards object-oriented evaluation is addressed. We do not directly address the field of robotics taking advantage of advances in machine vision; this is a fast-changing area which is worth its own contribution.

  3. Automated mosaicking of sub-canopy video incorporating ancillary data

    Treesearch

    E. Kee; N.E. Clark; A.L. Abbott

    2002-01-01

    This work investigates the process of mosaicking overlapping video frames of individual tree stems in sub-canopy scenes captured with a portable multisensor instrument. The robust commercial computer vision systems that are in use today typically rely on precisely controlled conditions. Inconsistent lighting as well as image distortion caused by varying interior and...

  4. Vision based techniques for rotorcraft low altitude flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Suorsa, Ray; Smith, Philip

    1991-01-01

    An overview of research in obstacle detection at NASA Ames Research Center is presented. The research applies techniques from computer vision to the automation of rotorcraft navigation. The development of a methodology for detecting the range to obstacles based on the maximum utilization of passive sensors is emphasized. The development of a flight and image data base for verification of vision-based algorithms, and a passive ranging methodology tailored to the needs of helicopter flight, are discussed. Preliminary results indicate that it is possible to obtain adequate range estimates except in regions close to the focus of expansion (FOE); closer to the FOE, the error in range increases because the magnitude of the disparity becomes smaller, resulting in a low SNR.

  5. Operational Based Vision Assessment Automated Vision Test Collection User Guide

    DTIC Science & Technology

    2017-05-15

    repeatability to support correlation analysis. The AVT research grade tests also support interservice, international, industry, and academic partnerships...software, provides information concerning various menu options and operation of the test, and provides a brief description of each of the automated vision...

  6. Novel computer vision analysis of nasal shape in children with unilateral cleft lip.

    PubMed

    Mercan, Ezgi; Morrison, Clinton S; Stuhaug, Erik; Shapiro, Linda G; Tse, Raymond W

    2018-01-01

    Optimization of treatment of the unilateral cleft lip nasal deformity (uCLND) is hampered by lack of objective means to assess initial severity and changes produced by treatment and growth. The purpose of this study was to develop automated 3D image analysis specific to the uCLND; assess the correlation of these measures to esthetic appraisal; measure changes that occur with treatment and differences amongst cleft types. Dorsum Deviation, Tip-Alar Volume Ratio, Alar-Cheek Definition, and Columellar Angle were assessed using computer-vision techniques. Subjects included infants before and after primary cleft lip repair (N = 50) and children aged 8-10 years with previous cleft lip (N = 50). Two expert surgeons ranked subjects according to esthetic nose appearance. Computer-based measurements strongly correlated with rankings of infants pre-repair (r = 0.8, 0.75, 0.41 and 0.54 for Dorsum Deviation, Tip-Alar Volume Ratio, Alar-Cheek Definition, and Columellar Angle, p < 0.01) while all measurements except Alar-Cheek Definition correlated moderately with rankings of older children post-repair (r ∼ 0.35, p < 0.01). Measurements were worse with greater severity of cleft type but improved following initial repair. Abnormal Dorsum Deviation and Columellar Angle persisted after surgery and were more severe with greater cleft type. Four fully-automated measures were developed that are clinically relevant, agree with expert evaluations and can be followed through initial surgery and in older children. Computer vision analysis techniques can quantify the nasal deformity at different stages, offering efficient and standardized tools for large studies and data-driven conclusions. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  7. Development and testing of an automated computer tablet-based method for self-testing of high and low contrast near visual acuity in ophthalmic patients.

    PubMed

    Aslam, Tariq M; Parry, Neil R A; Murray, Ian J; Salleh, Mahani; Col, Caterina Dal; Mirza, Naznin; Czanner, Gabriela; Tahir, Humza J

    2016-05-01

    Many eye diseases require ongoing assessment for optimal management, creating an ever-increasing burden on patients and hospitals that could potentially be reduced through home vision monitoring. However, there is limited evidence for the utility of current applications and devices for this. To address this, we present a new automated, computer tablet-based method for self-testing near visual acuity (VA) for both high and low contrast targets. We report on its reliability and agreement with gold standard measures. The Mobile Assessment of Vision by intERactIve Computer (MAVERIC) system consists of a calibrated computer tablet housed in a bespoke viewing chamber. Purpose-built software automatically elicits touch-screen responses from subjects to measure their near VA for either low or high contrast acuity. Near high contrast acuity was measured using both the MAVERIC system and a near Landolt C chart in one eye for 81 patients, and low contrast acuity was measured using the MAVERIC system and a 25% contrast near ETDRS chart in one eye of a separate 95 patients. The MAVERIC near acuity was also retested after 20 min to evaluate repeatability. Repeatability of both high and low contrast MAVERIC acuity measures, and their agreement with the chart tests, was assessed using the Bland-Altman comparison method. One hundred and seventy-three patients (96%) completed the self-testing MAVERIC system without formal assistance. The resulting MAVERIC vision scores demonstrated good repeatability and good agreement with the gold-standard near chart measures. This study demonstrates the potential utility of the MAVERIC system for patients with ophthalmic disease to self-test their high and low contrast VA. The technique has a high degree of reliability and agreement with gold standard chart-based measurements.

  8. Automated Measurement of Visual Acuity in Pediatric Ophthalmic Patients Using Principles of Game Design and Tablet Computers.

    PubMed

    Aslam, Tariq M; Tahir, Humza J; Parry, Neil R A; Murray, Ian J; Kwak, Kun; Heyes, Richard; Salleh, Mahani M; Czanner, Gabriela; Ashworth, Jane

    2016-10-01

    To report on the utility of a computer tablet-based method for automated testing of visual acuity in children based on the principles of game design. We describe the testing procedure and present repeatability as well as agreement of the score with accepted visual acuity measures. Reliability and validity study. Setting: Manchester Royal Eye Hospital Pediatric Ophthalmology Outpatients Department. A total of 112 sequentially recruited patients. For each patient, 1 eye was tested with the Mobile Assessment of Vision by intERactIve Computer for Children (MAVERIC-C) system, consisting of a software application running on a computer tablet housed in a bespoke viewing chamber. The application elicited touch-screen responses using a game design to encourage compliance and automatically acquire visual acuity scores of participating patients. Acuity was then assessed by an examiner with a standard chart-based near ETDRS acuity test before the MAVERIC-C assessment was repeated. Reliability of the MAVERIC-C near visual acuity score and agreement of the MAVERIC-C score with the near ETDRS chart for visual acuity. Altogether, 106 children (95%) completed the MAVERIC-C system without assistance. The vision scores demonstrated satisfactory reliability, with test-retest VA scores having a mean difference of 0.001 (SD ±0.136) and limits of agreement of 2 SD (LOA) of ±0.267. Comparison with the near ETDRS chart showed agreement with a mean difference of -0.0879 (±0.106) with LOA of ±0.208. This study demonstrates promising utility for software using a game design to enable automated testing of acuity in children with ophthalmic disease in an objective and accurate manner. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Automated egg grading system using computer vision: Investigation on weight measure versus shape parameters

    NASA Astrophysics Data System (ADS)

    Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul

    2018-04-01

    Chicken eggs are a food source in high demand by humans. Human operators cannot work perfectly and continuously when grading eggs. Instead of grading eggs by weight, an automatic egg grading system using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that a number of egg class assignments would change when grading by shape parameters rather than by weight. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A–D) produced in Malaysia were captured. The egg images were then processed using image pre-processing techniques such as cropping, smoothing, and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter, and perimeter, were extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) were performed with a k-nearest neighbour classifier in the classification process. Two methods, namely supervised learning (using the weight measure as graded by the egg supplier) and unsupervised learning (using egg shape parameters as graded by ourselves), were used in the experiment. Clustering results reveal many changes in egg classes after performing shape-based grading. On average, the best recognition result using the shape-based grading labels is 94.16%, while that using the weight-based labels is 44.17%. In conclusion, an automated egg grading system using computer vision is better implemented with shape-based features, since it works from images, whereas the weight parameter is more suitable for a weight-based grading system.
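
    A minimal sketch of the shape-based side of the pipeline is given below, assuming a binary egg mask per image and OpenCV/scikit-learn; the feature list follows the abstract, while the helper names and classifier settings are illustrative.

    ```python
    # Sketch of shape-feature extraction and k-NN grading for one egg mask;
    # grade labels and training data are placeholders.
    import cv2
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def shape_features(mask):
        """Area, perimeter, and major/minor axis lengths of the egg contour."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        c = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, closed=True)
        _, axes, _ = cv2.fitEllipse(c)  # needs >= 5 contour points
        minor, major = sorted(axes)
        return [area, perimeter, major, minor]

    # X: feature rows for training eggs; y: shape-based grades ('A'..'D').
    # clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    # grade = clf.predict([shape_features(new_mask)])
    ```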

  10. Flight data acquisition methodology for validation of passive ranging algorithms for obstacle avoidance

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1990-01-01

    The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.

  11. Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study

    PubMed Central

    Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.

    2009-01-01

    Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two six-month-old/mother dyads who each engaged in a face-to-face interaction. Automated measurements showed high associations with anatomically based manual coding (concurrent validity); measurements of smiling showed high associations with mean ratings of positive emotion made by naive observers (construct validity). For both infants and mothers, smile strength and eye constriction (the Duchenne marker) were correlated over time, creating a continuous index of smile intensity. Infant and mother smile activity exhibited changing (nonstationary) local patterns of association, suggesting the dyadic repair and dissolution of states of affective synchrony. The study provides insights into the potential and limitations of automated measurement of facial action. PMID:19885384

  12. Vision-guided gripping of a cylinder

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1991-01-01

    The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition stage verifies the presence of a cylinder in the camera field of view. Then an initial pose estimate is computed and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge, from the recognition phase, that there is a cylinder in the image and knowing the radius of the cylinder, 4 of the 6 pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current pose and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.

  13. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Treesearch

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  14. Unsupervised Decoding of Long-Term, Naturalistic Human Neural Recordings with Automated Video and Audio Annotations

    PubMed Central

    Wang, Nancy X. R.; Olson, Jared D.; Ojemann, Jeffrey G.; Rao, Rajesh P. N.; Brunton, Bingni W.

    2016-01-01

    Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Implementing Brain Computer Interfaces (BCIs) outside carefully controlled experiments in laboratory settings requires adaptive and scalable strategies with minimal supervision. Here we describe an unsupervised approach to decoding neural states from naturalistic human brain recordings. We analyzed continuous, long-term electrocorticography (ECoG) data recorded over many days from the brain of subjects in a hospital room, with simultaneous audio and video recordings. We discovered coherent clusters in high-dimensional ECoG recordings using hierarchical clustering and automatically annotated them using speech and movement labels extracted from audio and video. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Interpretable behaviors were decoded from ECoG data, including moving, speaking and resting; the results were assessed by comparison with manual annotation. Discovered clusters were projected back onto the brain revealing features consistent with known functional areas, opening the door to automated functional brain mapping in natural settings. PMID:27148018

  15. [A study of biomechanical method for urine test based on color difference estimation].

    PubMed

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Zhou, Fengkun

    2008-02-01

    The biochemical analysis of urine is an important inspection and diagnostic method in hospitals. Conventional urine analysis relies mainly on colorimetric visual appraisal and automated detection; the colorimetric visual appraisal technique has largely been superseded, and the automated detection method is the one adopted in hospitals. However, the price of a urine biochemical analyzer on the market is around twenty thousand RMB yuan, which puts it out of reach of ordinary families. A computer vision system, by contrast, is not subject to a person's physiological and psychological influences, and its appraisal standard is objective and stable. Therefore, based on color theory, we have established a computer vision system that performs collection, management, display, and appraisal of the color difference between the standard threshold color and the color of the urine test paper after it reacts with the urine sample, so that the severity of an illness can be judged accurately. In this paper, we introduce this urine test biochemical analysis method, which is new and can be popularized in families. Experimental results show that the test method is easy to use and cost-effective, can monitor a whole course of illness, and can find extensive applications.
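
    A minimal sketch of the color-difference step, assuming sRGB patch samples and OpenCV; the CIE76-style distance and the nearest-standard grading rule are illustrative stand-ins for the paper's appraisal procedure.

    ```python
    # Color difference between a reagent-pad patch and a reference color.
    # Note: OpenCV's 8-bit Lab channels are rescaled, so these distances are
    # comparable to each other rather than strict CIE delta-E units.
    import cv2
    import numpy as np

    def delta_e(rgb_patch, rgb_reference):
        """CIE76-style distance between two mean patch colors."""
        def to_lab(rgb):
            px = np.uint8([[rgb]])  # 1x1 three-channel image
            return cv2.cvtColor(px, cv2.COLOR_RGB2LAB)[0, 0].astype(float)
        return float(np.linalg.norm(to_lab(rgb_patch) - to_lab(rgb_reference)))

    # The pad is graded by the nearest standard color block, e.g.:
    # grade = min(standards, key=lambda s: delta_e(pad_rgb, s.rgb))
    ```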

  16. Automated surface inspection for steel products using computer vision approach.

    PubMed

    Xi, Jiaqi; Shentu, Lifeng; Hu, Jikang; Li, Mian

    2017-01-10

    Surface inspection is a critical step in ensuring product quality in the steel-making industry. In order to relieve inspectors of laborious work and improve the consistency of inspection, much effort has been dedicated to automated inspection using computer vision approaches over the past decades. However, due to non-uniform illumination conditions and the similarity between surface textures and defects, present methods are usually applicable only to very specific cases. In this paper, a new framework for surface inspection is proposed to overcome these limitations. By investigating the image formation process, a quantitative model characterizing the impact of illumination on image quality is developed, based on which the non-uniform brightness in the image can be effectively removed. A simple classifier is then designed to identify defects among the surface textures. The significance of this approach lies in its robustness to illumination changes and its wide applicability to different inspection scenarios. The proposed approach has been successfully applied to the real-time surface inspection of round billets in real manufacturing. Implemented on a conventional industrial PC, the algorithm processes 12.5 frames per second with a successful detection rate of over 90% for turned and skinned billets.
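
    The paper derives its own quantitative image-formation model; as a generic stand-in, the sketch below shows the common flat-fielding idea of dividing out a heavily smoothed background estimate (NumPy/OpenCV assumed, kernel size illustrative).

    ```python
    # Illumination flattening: remove slow brightness gradients before
    # classifying defects against surface texture.
    import cv2
    import numpy as np

    def flatten_illumination(gray, kernel=101):
        """Divide the image by a heavily blurred copy of itself."""
        background = cv2.GaussianBlur(gray.astype(np.float32), (kernel, kernel), 0)
        flat = gray.astype(np.float32) / (background + 1e-6)
        return cv2.normalize(flat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    ```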

  17. Task-oriented situation recognition

    NASA Astrophysics Data System (ADS)

    Bauer, Alexander; Fischer, Yvonne

    2010-04-01

    Advances in computer vision methods for the detection, tracking, and recognition of objects in video streams create new opportunities for video surveillance: in the future, automated video surveillance systems will be able to detect critical situations early enough to enable an operator to take preventive actions, instead of using video material merely for forensic investigations. However, problems such as limited computational resources, privacy regulations, and constant change in potential threats have to be addressed by a practical automated video surveillance system. In this paper, we show how these problems can be addressed using a task-oriented approach. The system architecture of the task-oriented video surveillance system NEST and an algorithm for the detection of abnormal behavior as part of the system are presented and illustrated for the surveillance of guests inside a video-monitored building.

  18. Compact Microscope Imaging System with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer-controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability: tasks that, heretofore, have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: the translation stage is used to move the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that results in the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
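
    The focusing loop lends itself to a short sketch taken directly from the description above: score each stage position by how much of the image's spectral energy lies at high spatial frequencies, and keep the best. The cutoff value and the stage/grab calls are illustrative assumptions, not CMIS interfaces.

    ```python
    # FFT-based focus metric: sharper images push spectral energy outward.
    import numpy as np

    def focus_score(image, low_freq_cutoff=0.1):
        """Fraction of spectral energy beyond a normalized radius cutoff."""
        f = np.fft.fftshift(np.fft.fft2(image.astype(float)))
        power = np.abs(f) ** 2
        h, w = power.shape
        yy, xx = np.ogrid[:h, :w]
        r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
        return power[r > low_freq_cutoff].sum() / power.sum()

    # Coarse-to-fine search over hypothetical stage positions:
    # best_z = max(z_steps, key=lambda z: focus_score(grab_frame_at(z)))
    ```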

  19. Color image processing and vision system for an automated laser paint-stripping system

    NASA Astrophysics Data System (ADS)

    Hickey, John M., III; Hise, Lawson

    1994-10-01

    Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures, and paints ranging from semi-gloss grays to high-gloss red, white, and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color temperature lighting for reliable image analysis.

  20. Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.

    PubMed

    Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O

    2014-12-01

    Craniofacial superimposition can provide evidence to support whether or not some human skeletal remains belong to a missing person. It involves the process of overlaying a skull with a number of ante mortem images of an individual and the analysis of their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage focuses on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method; practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error-prone, and time-consuming part of the whole process. Although a numerical assessment of overlay quality has not yet been achieved, computer vision and soft computing arise as powerful tools to automate the task, dramatically reducing the time taken by the expert and producing an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can thus be considered a tool to aid forensic anthropologists in the skull-face overlay, automating and avoiding the subjectivity of the most tedious task within craniofacial superimposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. Computer vision based nacre thickness measurement of Tahitian pearls

    NASA Astrophysics Data System (ADS)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros, more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl destined for exportation has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely its large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a purpose-built heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measurement that accounts for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.
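
    Assuming the pearl boundary and nucleus circle have been segmented as described, the 2-dimensional nacre thickness profile reduces to a radial-distance computation. The NumPy sketch below (helper names ours) illustrates the idea, without the paper's certainty measurement or cavity handling.

    ```python
    # Nacre thickness per viewing angle, from segmented boundaries (pixels).
    import numpy as np

    def nacre_thickness(pearl_contour, nucleus_center, nucleus_radius, n_rays=360):
        """Distance from the nucleus surface to the pearl boundary per ray."""
        cx, cy = nucleus_center
        pts = np.asarray(pearl_contour, float) - [cx, cy]  # (N, 2) points
        pt_angles = np.arctan2(pts[:, 1], pts[:, 0]) % (2 * np.pi)
        pt_radii = np.hypot(pts[:, 0], pts[:, 1])
        angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
        thickness = np.empty(n_rays)
        for i, a in enumerate(angles):
            # Nearest boundary point in angle; adequate for smooth outlines.
            j = np.argmin(np.abs((pt_angles - a + np.pi) % (2 * np.pi) - np.pi))
            thickness[i] = pt_radii[j] - nucleus_radius
        return thickness
    ```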

  2. Computer vision cracks the leaf code

    PubMed Central

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas

    2016-01-01

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664
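
    The "codebook of visual elements" is in the spirit of bag-of-visual-words models. A minimal sketch with k-means (scikit-learn assumed, codebook size illustrative) shows how local descriptors from a leaf image become a word histogram for family- and order-level classification; the study's actual feature learning differs in detail.

    ```python
    # Bag-of-visual-words sketch: cluster descriptors, then encode images.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_codebook(descriptor_sets, k=256):
        """Cluster pooled local descriptors into k visual words."""
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit(
            np.vstack(descriptor_sets))

    def encode(descriptors, codebook):
        """Normalized histogram of visual-word assignments for one image."""
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / hist.sum()
    ```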

  3. Unstructured Grid Adaptation: Status, Potential Impacts, and Recommended Investments Toward CFD Vision 2030

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien; Alonso, Juan J.

    2016-01-01

    Unstructured grid adaptation is a powerful tool to control discretization error for Computational Fluid Dynamics (CFD). It has enabled key increases in the accuracy, automation, and capacity of some fluid simulation applications. Slotnick et al. provide a number of case studies in the CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences to illustrate the current state of CFD capability and capacity. The authors forecast the potential impact of the High Performance Computing (HPC) environments anticipated in the year 2030 and identify that mesh generation and adaptivity continue to be significant bottlenecks in the CFD workflow. These bottlenecks may persist because very little government investment has been targeted at these areas. To motivate investment, the impacts of improved grid adaptation technologies are identified. The CFD Vision 2030 Study roadmap and anticipated capabilities in complementary disciplines are quoted to provide context for the progress made in grid adaptation in the past fifteen years, the current status, and a forecast for the next fifteen years with recommended investments. These investments are specific to mesh adaptation but impact other aspects of the CFD process. Finally, a strategy is identified to diffuse grid adaptation technology into production CFD workflows.

  4. Near real-time, on-the-move software PED using VPEF

    NASA Astrophysics Data System (ADS)

    Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane

    2015-05-01

    The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection through efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward-looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are often developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. To overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.

  5. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    NASA Astrophysics Data System (ADS)

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-03-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions, with (1) weak cell fluorescent labeling, using cell-simulating fluorescent microspheres of varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even with contrast degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models.

  6. The Mark 3 Haploscope

    NASA Technical Reports Server (NTRS)

    Decker, T. A.; Williams, R. E.; Kuether, C. L.; Logar, N. D.; Wyman-Cornsweet, D.

    1975-01-01

    A computer-operated binocular vision testing device was developed as one part of a system designed for NASA to evaluate the visual function of astronauts during spaceflight. This particular device, called the Mark 3 Haploscope, employs semi-automated psychophysical test procedures to measure visual acuity, stereopsis, phoria, fixation disparity, refractive state and accommodation/convergence relationships. Test procedures are self-administered and can be used repeatedly without subject memorization. The Haploscope was designed as one module of the complete NASA Vision Testing System. However, it is capable of stand-alone operation. Moreover, the compactness and portability of the Haploscope make possible its use in a broad variety of testing environments.

  7. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  8. Computer vision techniques for rotorcraft low-altitude flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Cheng, Victor H. L.

    1988-01-01

    A description is given of research that applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and on how research in this area relates to the guidance of other autonomous vehicles.

  9. Automated measurement of human body shape and curvature using computer vision

    NASA Astrophysics Data System (ADS)

    Pearson, Jeremy D.; Hobson, Clifford A.; Dangerfield, Peter H.

    1993-06-01

    A system to measure the surface shape of the human body has been constructed. The system uses a fringe pattern generated by projection of multi-stripe structured light. The optical methodology used is fully described and the algorithms used to process acquired digital images are outlined. The system has been applied to the measurement of the shape of the human back in scoliosis.

  10. A malaria diagnostic tool based on computer vision screening and visualization of Plasmodium falciparum candidate areas in digitized blood smears.

    PubMed

    Linder, Nina; Turkki, Riku; Walliander, Margarita; Mårtensson, Andreas; Diwan, Vinod; Rahtu, Esa; Pietikäinen, Matti; Lundin, Mikael; Lundin, Johan

    2014-01-01

    Microscopy is the gold standard for diagnosis of malaria, however, manual evaluation of blood films is highly dependent on skilled personnel in a time-consuming, error-prone and repetitive process. In this study we propose a method using computer vision detection and visualization of only the diagnostically most relevant sample regions in digitized blood smears. Giemsa-stained thin blood films with P. falciparum ring-stage trophozoites (n = 27) and uninfected controls (n = 20) were digitally scanned with an oil immersion objective (0.1 µm/pixel) to capture approximately 50,000 erythrocytes per sample. Parasite candidate regions were identified based on color and object size, followed by extraction of image features (local binary patterns, local contrast and Scale-invariant feature transform descriptors) used as input to a support vector machine classifier. The classifier was trained on digital slides from ten patients and validated on six samples. The diagnostic accuracy was tested on 31 samples (19 infected and 12 controls). From each digitized area of a blood smear, a panel with the 128 most probable parasite candidate regions was generated. Two expert microscopists were asked to visually inspect the panel on a tablet computer and to judge whether the patient was infected with P. falciparum. The method achieved a diagnostic sensitivity and specificity of 95% and 100% as well as 90% and 100% for the two readers respectively using the diagnostic tool. Parasitemia was separately calculated by the automated system and the correlation coefficient between manual and automated parasitemia counts was 0.97. We developed a decision support system for detecting malaria parasites using a computer vision algorithm combined with visualization of sample areas with the highest probability of malaria infection. The system provides a novel method for blood smear screening with a significantly reduced need for visual examination and has a potential to increase the throughput in malaria diagnostics.
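
    A simplified sketch of the candidate-classification stage: each candidate region is described by a texture histogram and scored by a support vector machine. Here uniform local binary patterns alone stand in for the study's combined LBP, local-contrast, and SIFT features (scikit-image/scikit-learn assumed; names and settings illustrative).

    ```python
    # Texture features and SVM ranking of parasite candidate regions.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(patch, P=8, R=1):
        """Uniform-LBP histogram of one grayscale candidate patch."""
        lbp = local_binary_pattern(patch, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    # X = [lbp_histogram(p) for p in training_patches]; y = 1 parasite, 0 not
    # clf = SVC(kernel="rbf", probability=True).fit(X, y)
    # Panel = the candidates with the highest predicted probability.
    ```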

  11. Garment Counting in a Textile Warehouse by Means of a Laser Imaging System

    PubMed Central

    Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban

    2013-01-01

    Textile logistic warehouses are highly automated mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low cost and small size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks generated using an array of phototransistors sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested on two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760
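
    Conceptually, the count reduces to counting interruptions of the laser line across the phototransistor array: each hanger hook blocks a contiguous run of sensors. A minimal sketch (threshold and minimum run length illustrative) is given below.

    ```python
    # Count hanger hooks as contiguous runs of blocked sensors in one scan.
    import numpy as np

    def count_hangers(scan_line, blocked_thresh=0.5, min_run=3):
        """scan_line: normalized intensities along the laser beam."""
        blocked = np.asarray(scan_line) < blocked_thresh
        # A rising edge (unblocked -> blocked) starts a new hook candidate.
        starts = np.flatnonzero(blocked[1:] & ~blocked[:-1]) + 1
        if blocked[0]:
            starts = np.r_[0, starts]
        # Keep only runs long enough to be a hook rather than sensor noise.
        return sum(1 for s in starts if blocked[s:s + min_run].all())
    ```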

  12. Visions of Automation and Realities of Certification

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Holloway, Michael C.

    2005-01-01

    Quite a lot of people envision automation as the solution to many of the problems in aviation and air transportation today, across all sectors: commercial, private, and military. This paper explains why some recent experiences with complex, highly-integrated, automated systems suggest that this vision will not be realized unless significant progress is made over the current state-of-the-practice in software system development and certification.

  13. The H-Metaphor as a Guideline for Vehicle Automation and Interaction

    NASA Technical Reports Server (NTRS)

    Flemisch, Frank O.; Adams, Catherine A.; Conway, Sheila R.; Goodrich, Ken H.; Palmer, Michael T.; Schutte, Paul C.

    2003-01-01

    Good design is not free of form. It does not necessarily happen through a mere sampling of technologies packaged together, through pure analysis, or just by following procedures. Good design begins with inspiration and a vision, a mental image of the end product, which can sometimes be described with a design metaphor. A successful example from the 20th century is the desktop metaphor, which took a real desktop as an orientation for the manipulation of electronic documents on a computer. Initially defined by Xerox, then refined by Apple and others, it could be found on almost every computer by the end of the 20th century. This paper sketches a specific metaphor for the emerging field of highly automated vehicles, their interactions with human users, and their interactions with other vehicles. In the introduction, general questions on vehicle automation are raised and related to the physical control of conventional vehicles and to the automation of some late 20th century vehicles. After some words on design metaphors, the H-Metaphor is introduced. More details of the metaphor's source are described, and its application to human-machine interaction, automation, and the management of intelligent vehicles is sketched. Finally, risks and opportunities in applying the metaphor to technical applications are discussed.

  14. Center of Excellence in Aerospace Manufacturing Automation

    DTIC Science & Technology

    1983-11-01

    affiliated industrial companies, who will provide financial support and ongoing guidance to the Institute. SIMA will encompass the design and management...tactile sensing, intelligent systems for robot task management, and computer vision for robot management. We are addressing the question of how to provide...than anything today's control systems could stably manage. To do this we have begun to develop a sequential family of new manipulators that are

  15. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capabilities, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are suitable for exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated with a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection. PMID:28286351
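
    The paper's feature descriptor and registration pipeline are its own; as a hedged sketch of the core geometric step, once feature correspondences between two partial point clouds are matched, the rigid transform aligning them has the classic closed-form SVD (Kabsch) solution:

    ```python
    # Minimal sketch: align matched Nx3 point sets P -> Q with a rigid transform.
    import numpy as np

    def rigid_transform(P: np.ndarray, Q: np.ndarray):
        """Return R, t minimizing sum ||R @ P_i + t - Q_i||^2."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t
    ```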

  17. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capabilities, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are suitable for exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated with a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection.

  18. The semantic web and computer vision: old AI meets new AI

    NASA Astrophysics Data System (ADS)

    Mundy, J. L.; Dong, Y.; Gilliam, A.; Wagner, R.

    2018-04-01

    There has been vast progress in linking semantic information across the billions of web pages through the use of ontologies encoded in the Web Ontology Language (OWL) based on the Resource Description Framework (RDF). A prime example is Wikipedia, where the knowledge contained in its more than four million pages is encoded in an ontological database called DBPedia http://wiki.dbpedia.org/. Web-based query tools can retrieve semantic information from DBPedia encoded in interlinked ontologies that can be accessed using natural language. This paper will show how this vast context can be used to automate the process of querying images and other geospatial data in support of reporting changes in structures and activities. Computer vision algorithms are selected and provided with context based on natural language requests for monitoring and analysis. The resulting reports provide semantically linked observations from images and 3D surface models.

  19. SU-F-I-45: An Automated Technique to Measure Image Contrast in Clinical CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, J; Abadi, E; Meng, B

    Purpose: To develop and validate an automated technique for measuring image contrast in chest computed tomography (CT) exams. Methods: An automated computer algorithm was developed to measure the distribution of Hounsfield units (HUs) inside four major organs: the lungs, liver, aorta, and bones. These organs were first segmented or identified using computer vision and image processing techniques. Regions of interest (ROIs) were automatically placed inside the lungs, liver, and aorta, and histograms of the HUs inside the ROIs were constructed. The mean and standard deviation of each histogram were computed for each CT dataset. Comparison of the mean and standard deviation of the HUs in the different organs provides different contrast values. The ROI for the bones is simply the segmentation mask of the bones. Since the histogram for bones does not follow a Gaussian distribution, the 25th and 75th percentiles were computed instead of the mean. The sensitivity and accuracy of the algorithm were investigated by comparing the automated measurements with manual measurements. Fifteen contrast-enhanced and fifteen non-contrast-enhanced chest CT clinical datasets were examined in the validation procedure. Results: The algorithm successfully measured the histograms of the four organs in both contrast-enhanced and non-contrast-enhanced chest CT exams. The automated measurements were in agreement with manual measurements. The algorithm has sufficient sensitivity, as indicated by the near-unity slope of the automated versus manual measurement plots. Furthermore, the algorithm has sufficient accuracy, as indicated by high coefficient of determination (R²) values ranging from 0.879 to 0.998. Conclusion: Patient-specific image contrast can be measured from clinical datasets. The algorithm can be run on both contrast-enhanced and non-enhanced clinical datasets. The method can be applied to automatically assess the contrast characteristics of clinical chest CT images and quantify dependencies that may not be captured in phantom data.
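
    A minimal numpy sketch of the measurement stage, assuming the organ masks come from the upstream segmentation step that is not shown here:

    ```python
    import numpy as np

    def organ_stats(hu_volume: np.ndarray, masks: dict) -> dict:
        """masks: organ name -> boolean array with the volume's shape."""
        stats = {}
        for organ, mask in masks.items():
            vals = hu_volume[mask]
            if organ == "bones":   # non-Gaussian histogram: use percentiles
                stats[organ] = (np.percentile(vals, 25), np.percentile(vals, 75))
            else:                  # lungs, liver, aorta: mean and std of ROI HUs
                stats[organ] = (vals.mean(), vals.std())
        return stats

    # e.g., a simple contrast value between two organs' central measures
    def contrast(stats: dict, a: str, b: str) -> float:
        return stats[a][0] - stats[b][0]
    ```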

  20. Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management, and its evolution to include reflex reactions via pulse monitoring.

  1. An automated machine vision system for the histological grading of cervical intraepithelial neoplasia (CIN).

    PubMed

    Keenan, S J; Diamond, J; McCluggage, W G; Bharucha, H; Thompson, D; Bartels, P H; Hamilton, P W

    2000-11-01

    The histological grading of cervical intraepithelial neoplasia (CIN) remains subjective, resulting in inter- and intra-observer variation and poor reproducibility in the grading of cervical lesions. This study has attempted to develop an objective grading system using automated machine vision. The architectural features of cervical squamous epithelium are quantitatively analysed using a combination of computerized digital image processing and Delaunay triangulation analysis; 230 images digitally captured from cases previously classified by a gynaecological pathologist included normal cervical squamous epithelium (n=30), koilocytosis (n=46), CIN 1 (n=52), CIN 2 (n=56), and CIN 3 (n=46). Intra- and inter-observer variation had kappa values of 0.502 and 0.415, respectively. A machine vision system was developed in KS400 macro programming language to segment and mark the centres of all nuclei within the epithelium. By object-oriented analysis of image components, the positional information of nuclei was used to construct a Delaunay triangulation mesh. Each mesh was analysed to compute triangle dimensions including the mean triangle area, the mean triangle edge length, and the number of triangles per unit area, giving an individual quantitative profile of measurements for each case. Discriminant analysis of the geometric data revealed the significant discriminatory variables from which a classification score was derived. The scoring system distinguished between normal and CIN 3 in 98.7% of cases and between koilocytosis and CIN 1 in 76.5% of cases, but only 62.3% of the CIN cases were classified into the correct group, with the CIN 2 group showing the highest rate of misclassification. Graphical plots of triangulation data demonstrated the continuum of morphological change from normal squamous epithelium to the highest grade of CIN, with overlapping of the groups originally defined by the pathologists. This study shows that automated location of nuclei in cervical biopsies using computerized image analysis is possible. Analysis of positional information enables quantitative evaluation of architectural features in CIN using Delaunay triangulation meshes, which is effective in the objective classification of CIN. This demonstrates the future potential of automated machine vision systems in diagnostic histopathology. Copyright 2000 John Wiley & Sons, Ltd.
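
    A hedged sketch of the triangulation stage, assuming nucleus centres have already been segmented: scipy's Delaunay yields the mesh, from which the reported descriptors (mean triangle area, mean edge length, triangles per unit area) follow directly.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def triangulation_features(centres: np.ndarray, region_area: float) -> dict:
        """centres: (N, 2) nucleus centre coordinates; region_area in px^2."""
        tri = Delaunay(centres)
        pts = centres[tri.simplices]            # (M, 3, 2) triangle vertices
        a, b, c = pts[:, 0], pts[:, 1], pts[:, 2]
        ab, ac = b - a, c - a
        areas = 0.5 * np.abs(ab[:, 0] * ac[:, 1] - ab[:, 1] * ac[:, 0])
        edges = np.concatenate([np.linalg.norm(b - a, axis=1),
                                np.linalg.norm(c - b, axis=1),
                                np.linalg.norm(a - c, axis=1)])
        return {"mean_triangle_area": float(areas.mean()),
                "mean_edge_length": float(edges.mean()),
                "triangles_per_unit_area": len(areas) / region_area}
    ```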

  2. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    PubMed Central

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-01-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak fluorescent cell labeling, using cell-simulating fluorescent microspheres of varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even when contrast is degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models. PMID:25822954

  3. Automated design of image operators that detect interest points.

    PubMed

    Trujillo, Leonardo; Olague, Gustavo

    2008-01-01

    This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research.
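
    For orientation, here is a sketch of the kind of hand-made design the evolved operators compete with, composed from the same simple primitives (Gaussian derivatives and arithmetic): a determinant-of-Hessian response with non-maximum suppression. The scale and threshold are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def doh_response(img: np.ndarray, sigma: float = 2.0) -> np.ndarray:
        Ixx = gaussian_filter(img, sigma, order=(0, 2))   # d2/dx2
        Iyy = gaussian_filter(img, sigma, order=(2, 0))   # d2/dy2
        Ixy = gaussian_filter(img, sigma, order=(1, 1))   # d2/dxdy
        return Ixx * Iyy - Ixy ** 2     # large at blob-like interest points

    def detect(img: np.ndarray, sigma: float = 2.0, rel_thresh: float = 0.2):
        r = doh_response(img.astype(float), sigma)
        peaks = (r == maximum_filter(r, size=5)) & (r > rel_thresh * r.max())
        return np.argwhere(peaks)       # (row, col) interest points
    ```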

  4. Computer Vision Hardware System for Automating Rough Mills of Furniture Plants

    Treesearch

    Richard W. Conners; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; D. Earl Kline; C.J. Gatchell

    1990-01-01

    The rough mill of a hardwood furniture or fixture plant is the place where dried lumber is cut into the rough parts that will be used in the rest of the manufacturing process. Approximately a third of the cost of operating the rough mill is the cost of the raw material. Hence any increase in the number of rough parts produced from a given volume of raw material can...

  5. A Computer Vision System for Automated Grading of Rough Hardwood Lumber Using a Knowledge-Based Approach

    Treesearch

    Tai-Hoon Cho; Richard W. Conners; Philip A. Araman

    1990-01-01

    A sawmill cuts logs into lumber and sells this lumber to secondary remanufacturers. The price a sawmiller can charge for a volume of lumber depends on its grade. For a number of species the price of a given volume of material can double in going from one grade to the next higher grade. Thus, accurately establishing the grade of a volume of hardwood lumber is very...

  6. Knowledge-based vision for space station object motion detection, recognition, and tracking

    NASA Technical Reports Server (NTRS)

    Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III

    1987-01-01

    Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.

  7. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demands and has the potential to address both complex jobs and highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish a general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom, made up of two articulated arms, one movable robot head, two charge-coupled device (CCD) cameras for producing stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.

  8. Quality Control by Artificial Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, Edmond Y.; Gleason, Shaun Scott; Niel, Kurt S.

    2010-01-01

    Computational technology has fundamentally changed many aspects of our lives. One clear evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Similar to the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers relevant to fundamental technology improvements that foster quality control by artificial vision, as well as papers that fine-tune the technology for specific applications. We aim to balance both theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with the wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters. They also show how to use discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier-basis functions for this task. Image registration is another important task for machine vision. Bingham and Arrowood investigate the implementation and results of applying Fourier phase matching to projection registration, with a particular focus on nondestructive testing using computed tomography. Readers interested in enriching their arsenal of image-processing algorithms for machine-vision tasks should find these papers valuable. Meanwhile, we have four papers dealing with more specific machine-vision tasks. The first one, by Yahiaoui et al., is quantitative in nature, using machine vision for real-time passenger counting. Occlusion is a common problem in counting objects and people, and they circumvent this issue with a dense stereovision system, achieving 97 to 99% accuracy in their tests. The second paper, by Oswald-Tranta et al., focuses on thermographic crack detection. An infrared camera is used to detect inhomogeneities, which may indicate surface cracks. They describe the various steps in developing fully automated testing equipment aimed at high throughput. Another paper describing an inspection system is by Molleda et al., which handles flatness inspection of rolled products. They employ optical-laser triangulation and 3-D surface reconstruction for this task, showing how these can be achieved in real time. Last but not least, Presles et al. propose a way to monitor the particle-size distribution of batch crystallization processes. This is achieved through a new in situ imaging probe and image-analysis methods. While it is unlikely that any reader is working on all four of these specific problems at the same time, we are confident that readers will find these papers inspiring and potentially helpful to their own machine-vision system developments.

  9. Leaf-GP: an open and automated software application for measuring growth phenotypes for arabidopsis and wheat.

    PubMed

    Zhou, Ji; Applegate, Christopher; Alonso, Albor Dobon; Reynolds, Daniel; Orford, Simon; Mackiewicz, Michal; Griffiths, Simon; Penfield, Steven; Pullen, Nick

    2017-01-01

    Plants demonstrate dynamic growth phenotypes that are determined by genetic and environmental factors. Phenotypic analysis of growth features over time is a key approach to understanding how plants interact with environmental change as well as respond to different treatments. Although the importance of measuring dynamic growth traits is widely recognised, available open software tools are limited in terms of batch image processing, multiple-trait analyses, software usability, and cross-referencing results between experiments, making automated phenotypic analysis problematic. Here, we present Leaf-GP (Growth Phenotypes), an easy-to-use and open software application that can be executed on different computing platforms. To facilitate diverse scientific communities, we provide three software versions, including a graphic user interface (GUI) for personal computer (PC) users, a command-line interface for high-performance computer (HPC) users, and a well-commented interactive Jupyter Notebook (also known as the iPython Notebook) for computational biologists and computer scientists. The software is capable of extracting multiple growth traits automatically from large image datasets. We have utilised it in Arabidopsis thaliana and wheat (Triticum aestivum) growth studies at the Norwich Research Park (NRP, UK). By quantifying a number of growth phenotypes over time, we have identified diverse plant growth patterns between different genotypes under several experimental conditions. As Leaf-GP has been evaluated with noisy image series acquired by different imaging devices (e.g. smartphones and digital cameras) and still produced reliable biological outputs, we believe that our automated analysis workflow and customised computer vision based feature extraction software implementation can facilitate a broader plant research community in their growth and development studies. Furthermore, because we implemented Leaf-GP using open Python-based computer vision, image analysis, and machine learning libraries, we believe that our software not only can contribute to biological research, but also demonstrates how to utilise existing open numeric and scientific libraries (e.g. Scikit-image, OpenCV, SciPy and Scikit-learn) to build sound plant phenomics analytic solutions in an efficient and effective way. Leaf-GP is a sophisticated software application that provides three approaches to quantify growth phenotypes from large image series. We demonstrate its usefulness and high accuracy based on two biological applications: (1) the quantification of growth traits for Arabidopsis genotypes under two temperature conditions; and (2) measuring wheat growth in the glasshouse over time. The software is easy to use and cross-platform; it can be executed on Mac OS, Windows and HPC, with open Python-based scientific libraries preinstalled. Our work presents the advancement of integrating computer vision, image analysis, machine learning and software engineering in plant phenomics software implementation. To serve the plant research community, our modular source code, detailed comments, executables (.exe for Windows; .app for Mac), and experimental results are freely available at https://github.com/Crop-Phenomics-Group/Leaf-GP/releases.
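
    In the spirit of Leaf-GP's open, Scikit-image-based pipeline, a single-trait sketch (the real software extracts many traits; the hue/saturation gates here are assumptions): segment green rosette pixels and report projected leaf area.

    ```python
    # Hedged single-trait sketch; colour thresholds are assumptions.
    from skimage import io, color, morphology

    def leaf_area_pixels(path: str, min_size: int = 64) -> float:
        rgb = io.imread(path)[..., :3] / 255.0
        hsv = color.rgb2hsv(rgb)
        # hue band for green foliage; saturation gate rejects grey background
        mask = (hsv[..., 0] > 0.17) & (hsv[..., 0] < 0.45) & (hsv[..., 1] > 0.25)
        mask = morphology.remove_small_objects(mask, min_size=min_size)
        return float(mask.sum())        # area in pixels; calibrate to mm^2
    ```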

  10. Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles.

    PubMed

    Clare, Andrew S; Cummings, Mary L; Repenning, Nelson P

    2015-11-01

    We examined the impact of priming on operator trust and system performance when supervising a decentralized network of heterogeneous unmanned vehicles (UVs). Advances in autonomy have enabled a future vision of single-operator control of multiple heterogeneous UVs. Real-time scheduling for multiple UVs in uncertain environments requires the computational ability of optimization algorithms combined with the judgment and adaptability of human supervisors. Because of system and environmental uncertainty, appropriate operator trust will be instrumental to maintain high system performance and prevent cognitive overload. Three groups of operators experienced different levels of trust priming prior to conducting simulated missions in an existing, multiple-UV simulation environment. Participants who play computer and video games frequently were found to have a higher propensity to overtrust automation. By priming gamers to lower their initial trust to a more appropriate level, system performance was improved by 10% as compared to gamers who were primed to have higher trust in the automation. Priming was successful at adjusting the operator's initial and dynamic trust in the automated scheduling algorithm, which had a substantial impact on system performance. These results have important implications for personnel selection and training for futuristic multi-UV systems under human supervision. Although gamers may bring valuable skills, they may also be potentially prone to automation bias. Priming during training and regular priming throughout missions may be one potential method for overcoming this propensity to overtrust automation. © 2015, Human Factors and Ergonomics Society.

  11. Technologies for Achieving Field Ubiquitous Computing

    NASA Astrophysics Data System (ADS)

    Nagashima, Akira

    Although the term “ubiquitous” may sound like jargon used in information appliances, ubiquitous computing is an emerging concept in industrial automation. This paper presents the author's visions of field ubiquitous computing, which is based on the novel Internet Protocol IPv6. IPv6-based instrumentation will realize the next generation manufacturing excellence. This paper focuses on the following five key issues: 1. IPv6 standardization; 2. IPv6 interfaces embedded in field devices; 3. Compatibility with FOUNDATION fieldbus; 4. Network securities for field applications; and 5. Wireless technologies to complement IP instrumentation. Furthermore, the principles of digital plant operations and ubiquitous production to support the above key technologies to achieve field ubiquitous systems are discussed.

  12. Automated intelligent video surveillance system for ships

    NASA Astrophysics Data System (ADS)

    Wei, Hai; Nguyen, Hieu; Ramu, Prakash; Raju, Chaitanya; Liu, Xiaoqing; Yadegar, Jacob

    2009-05-01

    To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance systems able to detect, identify, track, and alert the crew to small watercraft that might pursue malicious intentions, while ruling out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships (AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical machine learning methodologies, the developed AIVS3 is not only capable of efficiently and robustly detecting, classifying, and tracking various maritime targets, but is also able to fuse heterogeneous target information to interpret scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in-the-loop (MITL). AIVS3 has been tested in various maritime scenarios and has shown accurate and effective threat detection performance. By reducing the reliance on human eyes to monitor cluttered scenes, AIVS3 will save manpower while increasing the accuracy of detection and identification of asymmetric attacks for ship protection.

  13. Semi-automated CCTV surveillance: the effects of system confidence, system accuracy and task complexity on operator vigilance, reliance and workload.

    PubMed

    Dadashi, N; Stedmon, A W; Pridmore, T P

    2013-09-01

    Recent advances in computer vision technology have led to the development of various automatic surveillance systems; however, their effectiveness is adversely affected by many factors and they are not completely reliable. This study investigated the potential of a semi-automated surveillance system to reduce CCTV operator workload in both detection and tracking activities. A further focus of interest was the degree of user reliance on the automated system. A simulated prototype was developed which mimicked an automated system that provided different levels of system confidence information. Dependent variable measures were taken for secondary task performance, reliance, and subjective workload. When the automatic component of a semi-automatic CCTV surveillance system provided reliable system confidence information to operators, workload significantly decreased and spare mental capacity significantly increased. Providing feedback about system confidence and accuracy appears to be one important way of making the status of the automated component of the surveillance system more 'visible' to users and hence more effective to use. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. Computer Vision Research and its Applications to Automated Cartography

    DTIC Science & Technology

    1985-09-01

    ...3-D Scene Geometry, Thomas M. Strat and Martin A. Fischler; Appendix D: A New Sense for Depth of Field, Alex P. Pentland ... 3-D modeling. A. Baseline Stereo System: As a framework for integration and evaluation of our research in modeling 3-D scene geometry, as well as a ... B. New Methods for Stereo Compilation: As we previously indicated, the conventional approach to recovering scene geometry from a stereo pair of

  15. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    PubMed

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae damage fruit, which can negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and for quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 gives candidate identification results. Afterwards, users can perform manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level in an independent multi-part image automatic identification test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies. It brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
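
    The exact "Gabor surface feature" is not specified in the abstract; as a stand-in sketch, pooled responses of a small Gabor filter bank give the kind of texture descriptor a content-based image retrieval framework can index:

    ```python
    import numpy as np
    from skimage.filters import gabor

    def gabor_descriptor(gray: np.ndarray) -> np.ndarray:
        feats = []
        for frequency in (0.1, 0.2, 0.3):
            for theta in np.linspace(0, np.pi, 4, endpoint=False):
                real, imag = gabor(gray, frequency=frequency, theta=theta)
                mag = np.hypot(real, imag)
                feats += [mag.mean(), mag.std()]   # energy statistics per band
        return np.asarray(feats)                   # 24-D texture descriptor
    ```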

  16. Machine vision for various manipulation tasks

    NASA Astrophysics Data System (ADS)

    Domae, Yukiyasu

    2017-03-01

    Bin-picking, re-grasping, pick-and-place, kitting: there are many manipulation tasks in the automation of factories, warehouses, and other settings. The main problem in automating them is that the target objects (items/parts) have various shapes, weights, and surface materials. In my talk, I will present the latest machine vision systems and algorithms addressing this problem.

  17. Fuzzy logic control of an AGV

    NASA Astrophysics Data System (ADS)

    Kelkar, Nikhal; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The controller incorporates a fuzzy logic approach for steering and speed control, a neuro-fuzzy approach for ultrasound sensing (not discussed in this paper), and an overall expert system. The advantages of a modular system are related to portability and transportability, i.e., any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensors. The speed and steering fuzzy logic controller is supervised by a 486 computer through a multi-axis motion controller. The obstacle avoidance system is based on a microcontroller interfaced with six ultrasonic transducers. This microcontroller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable independent system in which high-speed computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data is collected by a vision tracking device that transmits the X, Y coordinates of the lane marker to the control computer. Simulation and testing of these systems yielded promising results. This design, in its modularity, creates a portable autonomous fuzzy logic controller applicable to any mobile vehicle with only minor adaptations.

  18. Results of the 2016 International Skin Imaging Collaboration International Symposium on Biomedical Imaging challenge: Comparison of the accuracy of computer algorithms to dermatologists for the diagnosis of melanoma from dermoscopic images.

    PubMed

    Marchetti, Michael A; Codella, Noel C F; Dusza, Stephen W; Gutman, David A; Helba, Brian; Kalloo, Aadi; Mishra, Nabin; Carrera, Cristina; Celebi, M Emre; DeFazio, Jennifer L; Jaimes, Natalia; Marghoob, Ashfaq A; Quigley, Elizabeth; Scope, Alon; Yélamos, Oriol; Halpern, Allan C

    2018-02-01

    Computer vision may aid in melanoma detection. We sought to compare melanoma diagnostic accuracy of computer algorithms to dermatologists using dermoscopic images. We conducted a cross-sectional study using 100 randomly selected dermoscopic images (50 melanomas, 44 nevi, and 6 lentigines) from an international computer vision melanoma challenge dataset (n = 379), along with individual algorithm results from 25 teams. We used 5 methods (nonlearned and machine learning) to combine individual automated predictions into "fusion" algorithms. In a companion study, 8 dermatologists classified the lesions in the 100 images as either benign or malignant. The average sensitivity and specificity of dermatologists in classification was 82% and 59%. At 82% sensitivity, dermatologist specificity was similar to the top challenge algorithm (59% vs. 62%, P = .68) but lower than the best-performing fusion algorithm (59% vs. 76%, P = .02). Receiver operating characteristic area of the top fusion algorithm was greater than the mean receiver operating characteristic area of dermatologists (0.86 vs. 0.71, P = .001). The dataset lacked the full spectrum of skin lesions encountered in clinical practice, particularly banal lesions. Readers and algorithms were not provided clinical data (eg, age or lesion history/symptoms). Results obtained using our study design cannot be extrapolated to clinical practice. Deep learning computer vision systems classified melanoma dermoscopy images with accuracy that exceeded some but not all dermatologists. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
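
    A sketch of the two fusion flavours under stated assumptions (a matrix of per-algorithm malignancy probabilities per image): a nonlearned fusion simply averages the scores, while a learned fusion fits a meta-classifier on labelled training images.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def mean_fusion(scores: np.ndarray) -> np.ndarray:
        """scores: (n_images, n_algorithms) predicted malignancy probabilities."""
        return scores.mean(axis=1)

    def learned_fusion(train_scores, train_labels, test_scores):
        """Stack individual algorithm outputs with a logistic meta-classifier."""
        meta = LogisticRegression(max_iter=1000).fit(train_scores, train_labels)
        return meta.predict_proba(test_scores)[:, 1]
    ```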

  19. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device that communicates the X, Y coordinates of blobs along the lane markers to the computer. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and avoid obstacles at the same time.

  20. [Advances in automatic detection technology for images of thin blood film of malaria parasite].

    PubMed

    Juan-Sheng, Zhang; Di-Qiang, Zhang; Wei, Wang; Xiao-Guang, Wei; Zeng-Guo, Wang

    2017-05-05

    This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria in microscope images of thin blood film smears. After introducing the background and significance of automatic detection technology, the existing detection technologies are summarized and divided into several steps, including image acquisition, pre-processing, morphological analysis, segmentation, counting, and pattern classification components. The principles and implementation methods of each step are then given in detail. In addition, extending automatic detection technology to thick blood film smears is put forward as a question worthy of study, and a perspective on future work toward realizing automated microscopy diagnosis of malaria is provided.
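
    A minimal sketch of the segmentation-and-count stage common to the reviewed pipelines, assuming a grayscale thin-film image with dark stained cells (touching cells would additionally need watershed splitting):

    ```python
    from skimage import filters, measure, morphology

    def count_cells(gray, min_area: int = 50) -> int:
        mask = gray < filters.threshold_otsu(gray)   # stained cells are dark
        mask = morphology.remove_small_objects(mask, min_size=min_area)
        return int(measure.label(mask).max())        # labelled object count
    ```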

  1. Gearing up to the factory of the future

    NASA Astrophysics Data System (ADS)

    Godfrey, D. E.

    1985-01-01

    The features of factories and the manufacturing techniques and tools of the near future are discussed. The spur to incorporate new technologies on the factory floor will originate in management, who must guide the interfacing of computer-enhanced equipment with traditional manpower, materials, and machines. Electronic control with responsiveness and flexibility will be the key concept in an integrated approach to processing materials. Microprocessor-controlled laser and fluid cutters add accuracy to cutting operations. Unattended operation will become feasible when automated inspection is added to a work station through developments in robot vision. Optimum shop management will be achieved through AI programming of parts manufacturing, optimized work flows, and cost accounting. The automation enhancements will allow designers to directly affect the parts being produced on the factory floor.

  2. Inspection of wear particles in oils by using a fuzzy classifier

    NASA Astrophysics Data System (ADS)

    Hamalainen, Jari J.; Enwald, Petri

    1994-11-01

    The reliability of stand-alone machines and larger production units can be improved by automated condition monitoring. Analysis of wear particles in lubricating or hydraulic oils helps diagnose the wear states of machine parts. This paper presents a computer vision system for the automated classification of wear particles. Digitized images from experiments with a bearing test bench, a hydraulic system at an industrial company, and oil samples from different industrial sources were used for algorithm development and testing. The wear particles were divided into four classes indicating different wear mechanisms: cutting wear, fatigue wear, adhesive wear, and abrasive wear. The results showed that the fuzzy K-nearest neighbor classifier utilized gave the same distribution of wear particles as classification by a human expert.
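
    The paper's features and class memberships are its own; as a hedged reconstruction, the standard Keller-style fuzzy K-nearest-neighbour rule on which such classifiers are based looks like this:

    ```python
    import numpy as np

    def fuzzy_knn(x, X_train, U_train, k: int = 5, m: float = 2.0):
        """U_train: (n_train, n_classes) fuzzy memberships of training samples."""
        d = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(d)[:k]                       # k nearest particles
        w = 1.0 / np.maximum(d[idx], 1e-9) ** (2.0 / (m - 1.0))
        u = (w[:, None] * U_train[idx]).sum(axis=0) / w.sum()
        return u            # class memberships; argmax gives the crisp label
    ```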

  3. Heliophysics Data Environment: What's next? (Invited)

    NASA Astrophysics Data System (ADS)

    Martens, P.

    2010-12-01

    In the last two decades the Heliophysics community has witnessed the societal recognition of the importance of space weather and space climate for our technology and ecology, resulting in a renewed priority for and investment in Heliophysics. As a result of that, and of the explosive development of information technology, Heliophysics has experienced an exponential growth in the amount and variety of data acquired, as well as the easy electronic storage and distribution of these data. The Heliophysics community has responded well to these challenges. The first, most obvious, and most needed response was the development of Virtual Heliophysics Observatories. While the VxOs of Heliophysics still need a lot of work with respect to the expansion of search options and interoperability, I believe the basic structures and functionalities have been established and that they meet the needs of the community. In the future we'll see a refinement, completion, and integration of VxOs, not a fundamentally different approach -- in my opinion. The challenge posed by the huge increase in the amount of data is not met by VxOs alone. No individual scientist or group, even with the assistance of tons of graduate students, can analyze the torrent of data currently coming down from the fleet of heliospheric observatories. Once more, information technology provides an opportunity: automated feature recognition of solar imagery is feasible, has been implemented in a number of instances, and is strongly supported by NASA. For example, the SDO Feature Finding Team is developing a suite of 16 feature recognition modules for SDO imagery that operates in near-real time, produces space-weather warnings, and populates on-line event catalogs. Automated feature recognition -- "computer vision" -- not only saves enormous amounts of time in the analysis of events, it also allows for a shift from the analysis of single events to that of sets of features and events -- the latter being by far the most important implication of computer vision. Consider some specific examples of the possibilities here. From the on-line SDO metadata a user can produce, with a few IDL commands, information that previously would have taken years to compile, e.g.:
    - Draw a butterfly diagram for Active Regions.
    - Find all filaments that coincide with sigmoids and correlate the automatically detected sigmoid handedness with filament chirality.
    - Correlate EUV jets with small-scale flux emergence in coronal holes only.
    - Draw PIL maps with regions of high shear and large magnetic field gradients overlaid, to pinpoint potential flaring regions; then correlate with actual flare occurrence.
    I emphasize that access to those metadata will be provided by VxOs, and that the interplay between computer vision codes and data will be facilitated by VxOs. My vision for the near and medium future of the VxOs is then to provide a simple and seamless interface between data, cataloged metadata, and computer vision software, either existing or newly developed by the user. Heliospheric virtual observatories and computer vision systems will work together to constantly monitor the Sun, provide space weather warnings, populate catalogs of metadata, analyze trends, and produce real-time on-line imagery of current events.

  4. Artificial Intelligence Techniques for Automatic Screening of Amblyogenic Factors

    PubMed Central

    Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard

    2008-01-01

    Purpose: To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. Methods: In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. Results: The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the “gold standard” specialist examination with a “refer/do not refer” decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as was significant anisometropia. The program was less correct in identifying more moderate refractive errors, below +5 and less than −7. Conclusions: Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting amblyogenic factors in children aged 6 months to 6 years. PMID:19277222

  5. Artificial intelligence techniques for automatic screening of amblyogenic factors.

    PubMed

    Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard

    2008-01-01

    To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the "gold standard" specialist examination with a "refer/do not refer" decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as was significant anisometropia. The program was less correct in identifying more moderate refractive errors, below +5 and less than -7. Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting amblyogenic factors in children aged 6 months to 6 years.
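
    A sketch of the decision-tree stage under stated assumptions: hypothetical image-derived features (crescent size, reflex symmetry, and the like) feed a tree learner evaluated by tenfold testing, e.g. with scikit-learn:

    ```python
    # X: (n_children, n_features) image-derived features (hypothetical);
    # y: specialist refer (1) / do-not-refer (0) labels.
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    def evaluate(X, y) -> float:
        clf = DecisionTreeClassifier(max_depth=4, random_state=0)
        return cross_val_score(clf, X, y, cv=10).mean()  # tenfold accuracy
    ```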

  6. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications, from industrial to entertainment, need reliable and accurate 3D information about the motion of an object and its parts. The motion of interest is often fast, as in vehicle movement, sports biomechanics, and the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses two to four machine vision cameras to acquire video sequences of object motion. All cameras operate synchronously at frame rates up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system has been used in a range of application fields and has demonstrated high accuracy and a high level of automation.
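
    A hedged sketch of the photogrammetric core with OpenCV: given the calibrated 3x4 projection matrices of two synchronized cameras and a tracked marker's image coordinates in both views, triangulation recovers the 3D point.

    ```python
    import cv2
    import numpy as np

    def triangulate_marker(P1, P2, uv1, uv2):
        """P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) pixel coords."""
        pts1 = np.asarray(uv1, dtype=float).reshape(2, 1)
        pts2 = np.asarray(uv2, dtype=float).reshape(2, 1)
        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1
        return (X_h[:3] / X_h[3]).ravel()                 # metric 3D point
    ```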

  7. Research and applications: Artificial intelligence

    NASA Technical Reports Server (NTRS)

    Raphael, B.; Duda, R. O.; Fikes, R. E.; Hart, P. E.; Nilsson, N. J.; Thorndyke, P. W.; Wilber, B. M.

    1971-01-01

    Research in the field of artificial intelligence is discussed. The focus of recent work has been the design, implementation, and integration of a completely new system for the control of a robot that plans, learns, and carries out tasks autonomously in a real laboratory environment. The computer implementation of low-level and intermediate-level actions; routines for automated vision; and the planning, generalization, and execution mechanisms are reported. A scenario that demonstrates the approximate capabilities of the current version of the entire robot system is presented.

  8. Terabytes to Megabytes: Data Reduction Onsite for Remote Limited Bandwidth Systems

    NASA Astrophysics Data System (ADS)

    Hirsch, M.

    2016-12-01

    Inexpensive, battery-powered embedded computer systems such as the Intel Edison and Raspberry Pi have inspired makers of all ages to create and deploy sensor systems. Geoscientists are also leveraging such inexpensive embedded computers for solar-powered and other low-resource systems for ionospheric observation. We have developed OpenCV-based machine vision algorithms that reduce terabytes per night of high-speed aurora video down to megabytes, aiding the automated sifting and retention of high-value data from the mountains of less interesting data. Given prohibitively expensive data connections in many parts of the world, such techniques may generalize beyond the auroral video and passive FM radar implemented so far. After the automated algorithm decides which data to keep, automated upload and distribution techniques are relevant to avoid excessive delay and consumption of researcher time. Open-source collaborative software development enables data audiences from experts through citizen enthusiasts to access the data and make exciting plots. Open software and data aid cross-disciplinary collaboration opportunities, STEM outreach, and increasing public awareness of the contributions each geoscience data collection system makes.
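
    A minimal sketch of the reduction idea, assuming standard video input: keep only frames whose inter-frame difference suggests activity. The threshold is an assumption, not the deployed system's value.

    ```python
    import cv2

    def reduce_video(path: str, out_prefix: str, thresh: float = 8.0) -> int:
        cap, prev, kept, i = cv2.VideoCapture(path), None, 0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None and cv2.absdiff(gray, prev).mean() > thresh:
                cv2.imwrite(f"{out_prefix}_{i:06d}.png", frame)  # keep event
                kept += 1
            prev, i = gray, i + 1
        cap.release()
        return kept
    ```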

  9. Vision requirements for Space Station applications

    NASA Technical Reports Server (NTRS)

    Crouse, K. R.

    1985-01-01

    Problems which will be encountered by computer vision systems in Space Station operations are discussed, along with solutions being examined at Johnson Space Center. Lighting cannot be controlled in space, nor can the random presence of reflective surfaces. Task-oriented capabilities are to include docking to moving objects, identification of unexpected objects during autonomous flights to different orbits, and diagnoses of damage and repair requirements for autonomous Space Station inspection robots. The approaches being examined to provide these and other capabilities are television IR sensors, advanced pattern recognition programs feeding on data from laser probes, laser radar for robot eyesight, and arrays of SMART sensors for automated location and tracking of target objects. Attention is also being given to liquid crystal light valves for optical processing of images for comparison with on-board electronic libraries of images.

  10. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Treesearch

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  11. Pre-operative segmentation of neck CT datasets for the planning of neck dissections

    NASA Astrophysics Data System (ADS)

    Cordes, Jeanette; Dornheim, Jana; Preim, Bernhard; Hertel, Ilka; Strauss, Gero

    2006-03-01

    For the pre-operative segmentation of CT neck datasets, we developed the software assistant NeckVision. The relevant anatomical structures for neck dissection planning can be segmented, and the resulting patient-specific 3D models are afterwards visualized in another software system for intervention planning. As a first step, we examined the appropriateness of elementary segmentation techniques based on gray values and contour information for extracting the structures in the neck region from CT data. Region growing, interactive watershed transformation, and live-wire are employed for segmentation of the different target structures. It is also examined which of the segmentation tasks can be automated. Based on this analysis, the software assistant NeckVision was developed to optimally support the workflow of image analysis for clinicians. The usability of NeckVision was tested in a first evaluation with four otorhinolaryngologists from the University Hospital of Leipzig, four computer scientists from the University of Magdeburg, and two laymen in both fields.

  12. Automated high-grade prostate cancer detection and ranking on whole slide images

    NASA Astrophysics Data System (ADS)

    Huang, Chao-Hui; Racoceanu, Daniel

    2017-03-01

    Recently, digital pathology (DP) has improved greatly due to developments in computer vision and machine learning. Automated detection of high-grade prostate carcinoma (HG-PCa) is an impactful medical use-case showing the paradigm of collaboration between DP and computer science: given a field of view (FOV) from a whole slide image (WSI), the computer-aided system determines the grade by classifying the FOV. Various approaches based on this formulation have been reported. However, two considerations motivate this work: first, there is still room for improvement in the detection accuracy of HG-PCa; second, clinical practice is more complex than simple image classification, and FOV ranking is also an essential step. In clinical practice, for example, a pathologist usually evaluates a case based on a few FOVs from the given WSI and then makes a decision based on the most severe FOV. This important ranking scenario has not yet been well discussed. In this work, we introduce an automated detection and ranking system for PCa based on Gleason pattern discrimination. Our experiments suggest that the proposed system achieves high-accuracy detection (95.57% +/- 2.1%) and excellent ranking performance. Hence, the proposed system has great potential to support daily tasks in the medical routine of clinical pathology.

  13. MIT-NASA Workshop: Transformational Technologies

    NASA Technical Reports Server (NTRS)

    Mankins, J. C. (Editor); Christensen, C. B.; Gresham, E. C.; Simmons, A.; Mullins, C. A.

    2005-01-01

    As a spacefaring nation, we are at a critical juncture in the evolution of space exploration. NASA has announced its Vision for Space Exploration, a vision of returning humans to the Moon, sending robots and eventually humans to Mars, and exploring the outer solar system via automated spacecraft. However, mission concepts have become increasingly complex, with the potential to yield a wealth of scientific knowledge. Meanwhile, there are significant resource challenges to be met. Launch costs remain a barrier to routine space flight; the ever-changing fiscal and political environments can wreak havoc on mission planning; and technologies are constantly improving, so systems that were state of the art when a program began can quickly become outmoded before a mission is even launched. This Conference Publication describes the workshop, which featured presentations by world-class experts on leading-edge technologies and applications in the areas of power and propulsion; communications; automation, robotics, computing, and intelligent systems; and transformational techniques for space activities. Workshops such as this one provide an excellent medium for capturing the broadest possible array of insights and expertise, learning from researchers in universities, national laboratories, NASA field Centers, and industry to help better our future in space.

  14. Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning.

    PubMed

    Hong, Weizhe; Kennedy, Ann; Burgos-Artizzu, Xavier P; Zelikowsky, Moriel; Navonne, Santiago G; Perona, Pietro; Anderson, David J

    2015-09-22

    A lack of automated, quantitative, and accurate assessment of social behaviors in mammalian animal models has limited progress toward understanding mechanisms underlying social interactions and their disorders such as autism. Here we present a new integrated hardware and software system that combines video tracking, depth sensing, and machine learning for automatic detection and quantification of social behaviors involving close and dynamic interactions between two mice of different coat colors in their home cage. We designed a hardware setup that integrates traditional video cameras with a depth camera, developed computer vision tools to extract the body "pose" of individual animals in a social context, and used a supervised learning algorithm to classify several well-described social behaviors. We validated the robustness of the automated classifiers in various experimental settings and used them to examine how genetic background, such as that of Black and Tan Brachyury (BTBR) mice (a previously reported autism model), influences social behavior. Our integrated approach allows for rapid, automated measurement of social behaviors across diverse experimental designs and also affords the ability to develop new, objective behavioral metrics.
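
    The supervised-learning step can be pictured with the short scikit-learn sketch below: a classifier trained on per-frame pose features against human-annotated behavior labels. The placeholder feature matrix, labels, and choice of a random forest are assumptions for illustration, not the authors' exact setup.

    ```python
    # Illustrative sketch of supervised behavior classification from pose
    # features (e.g., inter-animal distance, relative orientation per frame).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))        # stand-in for real pose features
    y = rng.integers(0, 3, size=1000)     # e.g., 0=attack, 1=mount, 2=other

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```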

  15. Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning

    PubMed Central

    Hong, Weizhe; Kennedy, Ann; Burgos-Artizzu, Xavier P.; Zelikowsky, Moriel; Navonne, Santiago G.; Perona, Pietro; Anderson, David J.

    2015-01-01

    A lack of automated, quantitative, and accurate assessment of social behaviors in mammalian animal models has limited progress toward understanding mechanisms underlying social interactions and their disorders such as autism. Here we present a new integrated hardware and software system that combines video tracking, depth sensing, and machine learning for automatic detection and quantification of social behaviors involving close and dynamic interactions between two mice of different coat colors in their home cage. We designed a hardware setup that integrates traditional video cameras with a depth camera, developed computer vision tools to extract the body “pose” of individual animals in a social context, and used a supervised learning algorithm to classify several well-described social behaviors. We validated the robustness of the automated classifiers in various experimental settings and used them to examine how genetic background, such as that of Black and Tan Brachyury (BTBR) mice (a previously reported autism model), influences social behavior. Our integrated approach allows for rapid, automated measurement of social behaviors across diverse experimental designs and also affords the ability to develop new, objective behavioral metrics. PMID:26354123

  16. Integration of autopatching with automated pipette and cell detection in vitro

    PubMed Central

    Wu (吴秋雨), Qiuyu; Kolb, Ilya; Callahan, Brendan M.; Su, Zhaolun; Stoy, William; Kodandaramaiah, Suhasa B.; Neve, Rachael; Zeng, Hongkui; Boyden, Edward S.; Forest, Craig R.

    2016-01-01

    Patch clamp is the main technique for measuring electrical properties of individual cells. Since its discovery in 1976 by Neher and Sakmann, patch clamp has been instrumental in broadening our understanding of the fundamental properties of ion channels and synapses in neurons. The conventional patch-clamp method requires manual, precise positioning of a glass micropipette against the cell membrane of a visually identified target neuron. Subsequently, a tight “gigaseal” connection between the pipette and the cell membrane is established, and suction is applied to establish the whole cell patch configuration to perform electrophysiological recordings. This procedure is repeated manually for each individual cell, making it labor intensive and time consuming. In this article we describe the development of a new automatic patch-clamp system for brain slices, which integrates all steps of the patch-clamp process: image acquisition through a microscope, computer vision-based identification of a patch pipette and fluorescently labeled neurons, micromanipulator control, and automated patching. We validated our system in brain slices from wild-type and transgenic mice expressing channelrhodopsin 2 under the Thy1 promoter (line 18) or injected with a herpes simplex virus-expressing archaerhodopsin, ArchT. Our computer vision-based algorithm makes the fluorescent cell detection and targeting user independent. Compared with manual patching, our system is superior in both success rate and average trial duration. It provides more reliable trial-to-trial control of the patching process and improves reproducibility of experiments. PMID:27385800

  17. Performance Monitoring Of A Computer Numerically Controlled (CNC) Lathe Using Pattern Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Daneshmend, L. K.; Pak, H. A.

    1984-02-01

    On-line monitoring of the cutting process in a CNC lathe is desirable to ensure unattended, fault-free operation in an automated environment. The state of the cutting tool is one of the most important parameters characterising the cutting process, but direct monitoring of the cutting tool or workpiece is not feasible during machining. However, several variables related to the state of the tool can be measured on-line. A novel monitoring technique is presented which uses cutting torque as the variable for on-line monitoring. A classifier is designed on the basis of the empirical relationship between cutting torque and flank wear. The empirical model required by the on-line classifier is established during an automated training cycle using machine vision for off-line direct inspection of the tool.
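
    A minimal sketch of the idea, under assumed numbers: fit the empirical torque-wear relationship from a training cycle (where wear is measured off-line by machine vision), then classify on-line torque readings against a wear limit. The linear model form and all values below are illustrative, not the paper's model.

    ```python
    # Hedged sketch: empirical torque -> flank-wear model plus a simple
    # wear-limit classifier. All numbers are illustrative placeholders.
    import numpy as np

    # Training cycle: on-line torque (N*m) paired with flank wear (mm)
    # measured off-line by machine vision.
    torque = np.array([4.1, 4.4, 4.9, 5.3, 5.8, 6.2])
    wear = np.array([0.05, 0.09, 0.14, 0.19, 0.25, 0.30])

    slope, intercept = np.polyfit(torque, wear, 1)  # empirical linear model

    def tool_worn(measured_torque, wear_limit=0.25):
        """Classify tool state from a single on-line torque reading."""
        predicted_wear = slope * measured_torque + intercept
        return predicted_wear >= wear_limit

    print(tool_worn(5.9))   # True once predicted wear crosses the limit
    ```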

  18. Computer vision applied to herbarium specimens of German trees: testing the future utility of the millions of herbarium specimen images for automated identification.

    PubMed

    Unger, Jakob; Merhof, Dorit; Renner, Susanne

    2016-11-16

    Global Plants, a collaboration between JSTOR and some 300 herbaria, now contains about 2.48 million high-resolution images of plant specimens, a number that continues to grow, and collections that are digitizing their specimens at high resolution are allocating considerable resources to the maintenance of computer hardware (e.g., servers) and to acquiring digital storage space. We here apply machine learning, specifically the training of a support vector machine (SVM), to classify specimen images into categories, ideally at the species level, using the 26 most common tree species in Germany as a test case. We designed an analysis pipeline and classification system consisting of segmentation, normalization, feature extraction, and classification steps and evaluated the system on two test sets, one with 26 species and the other with 17, in each case using 10 images per species of plants collected between 1820 and 1995, which simulates the empirical situation that most named species are represented in herbaria and databases, such as JSTOR, by few specimens. We achieved 73.21% accuracy of species assignments in the larger test set and 84.88% in the smaller test set. The results of this first application of a computer vision algorithm trained on images of herbarium specimens show that, despite the problem of overlapping leaves, leaf-architectural features can be used to categorize specimens to species with good accuracy. Computer vision is poised to play a significant role in future rapid identification, at least for frequently collected genera or species in the European flora.
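
    The classification stage can be sketched as below: a linear SVM over precomputed leaf-architecture feature vectors, evaluated by cross-validation. The placeholder features and the scikit-learn pipeline are assumptions for illustration; the paper's actual feature extraction (segmentation and normalization) is omitted.

    ```python
    # Sketch of the SVM classification stage over precomputed features.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(260, 64))    # 26 species x 10 specimens, 64-d features
    y = np.repeat(np.arange(26), 10)  # species labels

    model = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    scores = cross_val_score(model, X, y, cv=5)
    print("mean cross-validated accuracy:", scores.mean())
    ```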

  19. Vision Based Localization in Urban Environments

    NASA Technical Reports Server (NTRS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-01-01

    As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system are: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location, and 3) the video streams produced by each camera. At each step during the traverse the system captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step, and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system, which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter-based localization system, which uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and includes the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
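
    The particle-filter update at the heart of the second component follows the usual propagate-reweight-resample cycle, sketched generically below; the likelihood function (matching building-range features against the map) is left abstract, and nothing here is the JPL implementation.

    ```python
    # Generic particle-filter localization update (sketch).
    import numpy as np

    def pf_update(particles, weights, motion, motion_noise,
                  measurement, likelihood):
        """particles: (N, 3) array of (x, y, heading) hypotheses.
        motion: (dx, dy, dtheta) visual-odometry estimate since last step.
        likelihood(particle, measurement) -> p(z | x), e.g. from ranged
        building features matched against the a priori map."""
        n = len(particles)
        # 1) propagate every hypothesis through the motion estimate + noise
        particles = particles + motion \
            + np.random.normal(0.0, motion_noise, particles.shape)
        # 2) reweight by how well each hypothesis explains the measurement
        weights = weights * np.array(
            [likelihood(p, measurement) for p in particles])
        weights /= weights.sum()
        # 3) resample to concentrate particles on probable locations
        idx = np.random.choice(n, size=n, p=weights)
        return particles[idx], np.full(n, 1.0 / n)
    ```

    Because the posterior is carried by the particle set itself, several well-separated clusters of particles can coexist, which is how the system represents multiple possible locations at once.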

  20. Computer vision-based technologies and commercial best practices for the advancement of the motion imagery tradecraft

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Capel, David; Srinivasan, James

    2014-06-01

    Motion imagery capabilities within the Department of Defense/Intelligence Community (DoD/IC) have advanced significantly over the last decade, attempting to meet continuously growing data collection, video processing and analytical demands in operationally challenging environments. The motion imagery tradecraft has evolved accordingly, enabling teams of analysts to effectively exploit data and generate intelligence reports across multiple phases in structured Full Motion Video (FMV) Processing Exploitation and Dissemination (PED) cells. Yet the operational requirements are now changing drastically. The exponential growth in motion imagery data continues, but to this the community adds multi-INT data, interoperability with existing and emerging systems, expanded data access, nontraditional users, collaboration, automation, and support for ad hoc configurations beyond the current FMV PED cells. To break from the legacy system lifecycle, we look towards a technology application and commercial adoption model that will meet these future Intelligence, Surveillance and Reconnaissance (ISR) challenges. In this paper, we explore the application of cutting-edge computer vision technology to meet existing FMV PED shortfalls and address future capability gaps. For example, real-time georegistration services built on computer-vision-based feature tracking, multiple-view geometry, and statistical methods allow the fusion of motion imagery with other georeferenced information sources, providing unparalleled situational awareness. We then describe how these motion imagery capabilities may be readily deployed in a dynamically integrated analytical environment, employing an extensible framework, leveraging scalable enterprise-wide infrastructure and following commercial best practices.

  1. CAFE: Computer aided fabric evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sims, J.E.

    1994-05-06

    With the intent of automating the inspection of color-printed fabrics for defects, the Engineering Research Division of the Lawrence Livermore National Laboratory, together with several other national laboratories and the textile industry, has initiated the CAFE project. The project's objective is the development, implementation, and testing of an algorithm for the inspection of color-printed fabrics, taking advantage of the wide-ranging capabilities of computer vision. The first job of the algorithm is to teach the computer the 'correct' repeat; using this repeat as the reference, the algorithm then tests the remaining repeats in the pattern. There are two different ways to go about this first job, and in this paper we describe both methods.

  2. Vision technology/algorithms for space robotics applications

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar; Defigueiredo, Rui J. P.

    1987-01-01

    Automation and robotics for space applications have been proposed to increase productivity, improve reliability, increase flexibility, and raise safety, as well as to automate time-consuming tasks, increase the productivity and performance of crew-accomplished tasks, and perform tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multiple sensors ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.

  3. CAD-model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda G.

    1988-01-01

    A pose acquisition system operating in space must perform well in a variety of applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint using the vision models, construct view classes representing views of the objects, and use the view class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.

  4. Automated Low-Cost Smartphone-Based Lateral Flow Saliva Test Reader for Drugs-of-Abuse Detection.

    PubMed

    Carrio, Adrian; Sampedro, Carlos; Sanchez-Lopez, Jose Luis; Pimienta, Miguel; Campoy, Pascual

    2015-11-24

    Lateral flow assay tests are nowadays becoming powerful, low-cost diagnostic tools. Obtaining a result is usually subject to visual interpretation of colored areas on the test by a human operator, introducing subjectivity and the possibility of errors in the extraction of the results. While automated test readers providing a result-consistent solution are widely available, they usually lack portability. In this paper, we present a smartphone-based automated reader for drug-of-abuse lateral flow assay tests, consisting of an inexpensive light box and a smartphone device. Test images captured with the smartphone camera are processed in the device using computer vision and machine learning techniques to perform automatic extraction of the results. A deep validation of the system has been carried out showing the high accuracy of the system. The proposed approach, applicable to any line-based or color-based lateral flow test in the market, effectively reduces the manufacturing costs of the reader and makes it portable and massively available while providing accurate, reliable results.
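
    The core measurement can be sketched as follows: collapse the strip image to a 1-D intensity profile and flag positions darker than the local background, which correspond to the colored test/control lines. The orientation assumption (strip long axis vertical in the image) and the threshold are hypothetical; the published system additionally applies machine learning to the extracted values.

    ```python
    # Hedged sketch of line detection on a cropped lateral-flow strip image.
    import cv2
    import numpy as np

    def line_profile(strip_bgr):
        """1-D intensity profile along the strip's long axis (assumed
        vertical); dips correspond to colored test/control lines."""
        gray = cv2.cvtColor(strip_bgr, cv2.COLOR_BGR2GRAY)
        return gray.mean(axis=1)   # average across the strip width

    def detect_lines(profile, depth=15):
        """Flag rows darker than the background by `depth` gray levels."""
        background = np.median(profile)
        return np.where(profile < background - depth)[0]
    ```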

  5. Automated benthic counting of living and non-living components in Ngedarrak Reef, Palau via subsurface underwater video.

    PubMed

    Marcos, Ma Shiela Angeli; David, Laura; Peñaflor, Eileen; Ticzon, Victor; Soriano, Maricor

    2008-10-01

    We introduce an automated benthic counting system for rapid reef assessment that applies computer vision to subsurface underwater reef video. Video acquisition was executed by lowering a submersible bullet-type camera from a motor boat while moving across the reef area. A GPS and echo sounder were linked to the video recorder to record bathymetry and location points. Analysis of living and non-living components was implemented through image color and texture feature extraction from the reef video frames and classification via linear discriminant analysis. Compared to common rapid reef assessment protocols, our system can perform fine-scale data acquisition and processing in one day. Reef video was acquired in Ngedarrak Reef, Koror, Republic of Palau. Overall success rates range from 60% to 77% for depths of 1 to 3 m. The development of an automated rapid reef classification system is most promising for reef studies that need fast and frequent data acquisition of the percent cover of living and non-living components.
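
    A minimal sketch of the classification stage, assuming color/texture features have already been extracted per frame: linear discriminant analysis separating living from non-living cover. The placeholder data and scikit-learn usage are illustrative, not the authors' code.

    ```python
    # Sketch: LDA over color/texture features extracted from reef frames.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 12))     # e.g., mean RGB + texture statistics
    y = rng.integers(0, 2, size=500)   # 0 = non-living, 1 = living cover

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    print("held-out accuracy:", lda.score(X_te, y_te))
    ```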

  6. Application of the Modular Automated Reconfigurable Assembly System (MARAS) concept to adaptable vision gauging and parts feeding

    NASA Technical Reports Server (NTRS)

    By, Andre Bernard; Caron, Ken; Rothenberg, Michael; Sales, Vic

    1994-01-01

    This paper presents the first phase results of a collaborative effort between university researchers and a flexible assembly systems integrator to implement a comprehensive modular approach to flexible assembly automation. This approach, named MARAS (Modular Automated Reconfigurable Assembly System), has been structured to support multiple levels of modularity in terms of both physical components and system control functions. The initial focus of the MARAS development has been on parts gauging and feeding operations for cylinder lock assembly. This phase is nearing completion and has resulted in the development of a highly configurable system for vision gauging functions on a wide range of small components (2 mm to 100 mm in size). The reconfigurable concepts implemented in this adaptive Vision Gauging Module (VGM) are now being extended to applicable aspects of the singulating, selecting, and orienting functions required for the flexible feeding of similar mechanical components and assemblies.

  7. Automated Grading System for Evaluation of Superficial Punctate Keratitis Associated With Dry Eye.

    PubMed

    Rodriguez, John D; Lane, Keith J; Ousler, George W; Angjeli, Endri; Smith, Lisa M; Abelson, Mark B

    2015-04-01

    To develop an automated method of grading fluorescein staining that accurately reproduces the clinical grading system currently in use. From the slit lamp photograph of the fluorescein-stained cornea, the region of interest was selected and the punctate dot number calculated using software developed with the OpenCV computer vision library. Images (n = 229) were then divided into six incremental severity categories based on computed scores. The final selection of 54 photographs represented the full range of scores: nine images from each of six categories. These were then evaluated by three investigators using a clinical 0 to 4 corneal staining scale. Pearson correlations were calculated to compare investigator scores, and mean investigator and automated scores. Lin's concordance correlation coefficient (CCC) and Bland-Altman plots were used to assess agreement between methods and between investigators. Pearson's correlation between investigators was 0.914; the mean CCC between investigators was 0.882. Bland-Altman analysis indicated that scores assessed by investigator 3 were significantly higher than those of investigators 1 and 2 (paired t-test). The predicted grade was calculated to be Gpred = 1.48 log(Ndots) - 0.206. The Pearson's correlation coefficient between the two methods was 0.927 (P < 0.0001). The CCC between the predicted automated score Gpred and the mean investigator score was 0.929, 95% confidence interval (0.884-0.957). Bland-Altman analysis did not indicate bias; the SD of the difference between clinical and automated methods was 0.398. An objective, automated analysis of corneal staining provides a quality assurance tool that can be used to substantiate clinical grading of key corneal staining endpoints in multicentered clinical trials of dry eye.
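
    The reported mapping from punctate-dot count to a 0 to 4 grade can be implemented directly, as in the sketch below; note that the logarithm base is not stated in the abstract, so base 10 is an assumption here, as is the clamping to the ends of the scale.

    ```python
    # Direct implementation of the reported regression
    # Gpred = 1.48 * log(Ndots) - 0.206 (log base 10 assumed).
    import math

    def predicted_grade(n_dots):
        """Map a punctate dot count to the 0-4 clinical staining scale."""
        if n_dots < 1:
            return 0.0
        g = 1.48 * math.log10(n_dots) - 0.206
        return min(max(g, 0.0), 4.0)   # clamp to the scale (assumption)

    print(predicted_grade(100))   # ~2.75 on the 0-4 scale
    ```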

  8. Camera-enabled techniques for organic synthesis

    PubMed Central

    Ingham, Richard J; O’Brien, Matthew; Browne, Duncan L

    2013-01-01

    A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and on labour-intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to the advanced synthesis laboratories of the future. PMID:23766820

  9. Optical coherence tomography and computer-aided diagnosis of a murine model of chronic kidney disease

    NASA Astrophysics Data System (ADS)

    Wang, Bohan; Wang, Hsing-Wen; Guo, Hengchang; Anderson, Erik; Tang, Qinggong; Wu, Tongtong; Falola, Reuben; Smith, Tikina; Andrews, Peter M.; Chen, Yu

    2017-12-01

    Chronic kidney disease (CKD) is characterized by a progressive loss of renal function over time. Histopathological analysis of the condition of the glomeruli and the proximal convoluted tubules over time can provide valuable insights into the progression of CKD. Optical coherence tomography (OCT) is a technology that can analyze the microscopic structures of a kidney in a nondestructive manner. Recently, we have shown that OCT can provide real-time imaging of kidney microstructures in vivo without administering exogenous contrast agents. Here, a murine model of CKD induced by intravenous Adriamycin (ADR) injection is evaluated by OCT. OCT images of the kidneys were captured every week for up to eight weeks. Tubular diameter and the hypertrophic tubule population at multiple time points after ADR injection were evaluated through a fully automated computer vision system. Results revealed that mean tubular diameter and the hypertrophic tubule population increase with time in the post-ADR-injection period. The results suggest that OCT images of the kidney contain abundant information about kidney histopathology. Fully automated computer-aided diagnosis based on OCT has the potential for clinical evaluation of CKD conditions.

  10. The Ilac-Project Supporting Ancient Coin Classification by Means of Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavelar, A.; Zambanini, S.; Kampel, M.; Vondrovec, K.; Siegl, K.

    2013-07-01

    This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the interface between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousand coins. Furthermore, the system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collections by simply uploading photos of the obverse and reverse of a coin of interest. ILAC explores different computer vision techniques and their combinations for image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploits specific characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given, as well as an outlook on the next steps of the project.

  11. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. In the field of driver assistance systems in particular, scientific progress has reached a high level of performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut fur Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are being developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach combines sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main benefit of this approach is the integrative coupling of different algorithms providing partly redundant information.

  12. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end- to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  13. BisQue: cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Fedorov, D.; Miller, R. J.; Kvilekval, K. G.; Doheny, B.; Sampson, S.; Manjunath, B. S.

    2016-02-01

    Logistical and financial limitations of underwater operations are inherent in marine science, including biodiversity observation. Imagery is a promising way to address these challenges, but the diversity of organisms thwarts simple automated analysis. Recent developments in computer vision, such as convolutional neural networks (CNNs), are promising for automated classification and detection tasks but are typically very computationally expensive and require extensive training on large datasets. Therefore, managing and connecting distributed computation, large storage, and human annotations of diverse marine datasets is crucial for the effective application of these methods. BisQue is a cloud-based system for the management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery and associated data. Designed to hide the complexity of distributed storage, large computational clusters, diverse data formats and inhomogeneous computational environments behind a user-friendly web-based interface, BisQue is built around the idea of flexible, hierarchical, user-defined annotations. Such textual and graphical annotations can describe captured attributes and the relationships between data elements; they are powerful enough to describe cells in fluorescent 4D images, fish species in underwater videos and kelp beds in aerial imagery. Presently we are developing BisQue-based analysis modules for the automated identification of benthic marine organisms. Recent experiments with dropout- and CNN-based classification of several thousand annotated underwater images demonstrated an overall accuracy above 70% for the 15 best-performing species and above 85% for the top 5 species. Based on these promising results, we have extended BisQue with a CNN-based classification system allowing continuous training on user-provided data.

  14. Computer vision techniques for rotorcraft low altitude flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar

    1990-01-01

    Rotorcraft operating in high-threat environments fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy. Increasing levels of concealment are achieved by adopting different tactics during low-altitude flight. Rotorcraft employ three tactics during low-altitude flight: low-level, contour, and nap-of-the-earth (NOE). The key feature distinguishing the NOE mode from the other two modes is that the whole rotorcraft, including the main rotor, is kept below tree-top level whenever possible. This leads to the use of lateral maneuvers for avoiding obstacles, which in fact constitutes the means for concealment. The piloting of the rotorcraft is at best a very demanding task, and the pilot will need help from onboard automation tools in order to devote more time to mission-related activities. The development of an automation tool which has the potential to detect obstacles in the rotorcraft flight path, warn the crew, and interact with the guidance system to avoid detected obstacles presents challenging problems. Research is described which applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle-detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. The presentation concludes with some comments on future work and on how research in this area relates to the guidance of other autonomous vehicles.

  15. Imperceptible watermarking for security of fundus images in tele-ophthalmology applications and computer-aided diagnosis of retina diseases.

    PubMed

    Singh, Anushikha; Dutta, Malay Kishore

    2017-12-01

    The authentication and integrity verification of medical images is a critical and growing issue for patients in e-health services. Accurate identification of medical images and patient verification is an essential requirement to prevent error in medical diagnosis. The proposed work presents an imperceptible watermarking system to address the security of medical fundus images in tele-ophthalmology applications and computer-aided automated diagnosis of retinal diseases. In the proposed work, the patient identity is embedded in the fundus image in the singular value decomposition domain with an adaptive quantization parameter to maintain perceptual transparency for a variety of fundus images, whether healthy or disease-affected. In the proposed method, insertion of the watermark does not affect the automatic image-processing diagnosis of retinal objects and pathologies, which ensures uncompromised computer-based diagnosis associated with the fundus image. The patient ID is correctly recovered from the watermarked fundus image for integrity verification at the diagnosis centre. The proposed watermarking system was tested on a comprehensive database of fundus images and the results are convincing. The results indicate that the proposed watermarking method is imperceptible and does not affect computer vision-based automated diagnosis of retinal diseases. Correct recovery of the patient ID from the watermarked fundus image makes the proposed watermarking system applicable to the authentication of fundus images for computer-aided diagnosis and tele-ophthalmology applications. Copyright © 2017 Elsevier B.V. All rights reserved.
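
    A minimal sketch of quantization-based watermarking in the SVD domain, embedding one bit per image block by quantizing the block's leading singular value onto one of two interleaved lattices. The block size, quantization step, and lattice scheme below are generic assumptions, not the paper's adaptive parameters.

    ```python
    # Sketch: embed/extract one watermark bit via quantization of the
    # leading singular value of an image block (e.g., 8x8).
    import numpy as np

    def embed_bit(block, bit, q=24.0):
        """Embed `bit` by snapping the leading singular value onto the
        lattice offset by +q/4 (bit=1) or -q/4 (bit=0). Assumes q is
        small relative to the leading singular value."""
        u, s, vt = np.linalg.svd(block.astype(float), full_matrices=False)
        offset = q / 4.0 if bit else -q / 4.0
        s[0] = np.round((s[0] - offset) / q) * q + offset
        return u @ np.diag(s) @ vt

    def extract_bit(block, q=24.0):
        """Recover the bit by checking which lattice is nearer."""
        s0 = np.linalg.svd(block.astype(float), compute_uv=False)[0]
        d1 = abs(s0 - (np.round((s0 - q / 4) / q) * q + q / 4))
        d0 = abs(s0 - (np.round((s0 + q / 4) / q) * q - q / 4))
        return int(d1 < d0)
    ```

    The two lattices are q/2 apart, so the bit survives perturbations of the singular value smaller than q/4; choosing q per block is one way an adaptive quantization parameter, as described above, could trade robustness against perceptual transparency.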

  16. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in many types of applications. The classification and recognition of a specific object using a vision system involves challenging tasks in the fields of image processing and artificial intelligence. The ability of a vision system to capture and process images efficiently is very important for any intelligent system such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to the development of an automated, vision-based lawn mower robot. The work involved implementing DIP techniques to detect and recognize three different types of obstacles that commonly exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of the vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  17. Understanding and preventing computer vision syndrome.

    PubMed

    Loh, Ky; Redd, Sc

    2008-01-01

    The invention of the computer and advances in information technology have revolutionized and benefited society but at the same time have caused symptoms related to computer usage such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms lead to computer vision syndrome: the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. Display characteristics such as brightness, resolution, glare and quality are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  18. Automated Inspection of Power Line Corridors to Measure Vegetation Undercut Using Uav-Based Images

    NASA Astrophysics Data System (ADS)

    Maurer, M.; Hofer, M.; Fraundorfer, F.; Bischof, H.

    2017-08-01

    Power line corridor inspection is a time-consuming task that is performed mostly manually. As the development of UAVs has made huge progress in recent years, and photogrammetric computer vision systems have become well established, it is time to further automate inspection tasks. In this paper we present an automated processing pipeline to inspect vegetation undercuts of power line corridors. For this, the area of inspection is reconstructed, geo-referenced, and semantically segmented, and inter-class distance measurements are calculated. The presented pipeline performs an automated selection of the proper 3D reconstruction method for wiry objects (power lines) on the one hand and solid objects (the surroundings) on the other. The automated selection is realized by performing pixel-wise semantic segmentation of the input images using a fully convolutional neural network. Because the semantic 3D reconstructions are geo-referenced, documentation of the areas where maintenance work has to be performed is inherently included in the distance measurements and can be extracted easily. We evaluate the influence of the semantic segmentation on the 3D reconstruction and show that the automated semantic separation into wiry and dense objects in the 3D reconstruction routine improves the quality of the vegetation undercut inspection. We show that the semantic segmentation generalizes to datasets acquired using different acquisition routines and to different seasons.

  19. Accelerating the discovery of materials for clean energy in the era of smart automation

    NASA Astrophysics Data System (ADS)

    Tabor, Daniel P.; Roch, Loïc M.; Saikin, Semion K.; Kreisbeck, Christoph; Sheberla, Dennis; Montoya, Joseph H.; Dwaraknath, Shyam; Aykol, Muratahan; Ortiz, Carlos; Tribukait, Hermann; Amador-Bedolla, Carlos; Brabec, Christoph J.; Maruyama, Benji; Persson, Kristin A.; Aspuru-Guzik, Alán

    2018-05-01

    The discovery and development of novel materials in the field of energy are essential to accelerate the transition to a low-carbon economy. Bringing recent technological innovations in automation, robotics and computer science together with current approaches in chemistry, materials synthesis and characterization will act as a catalyst for revolutionizing traditional research and development in both industry and academia. This Perspective provides a vision for an integrated artificial intelligence approach towards autonomous materials discovery, which, in our opinion, will emerge within the next 5 to 10 years. The approach we discuss requires the integration of the following tools, which have already seen substantial development to date: high-throughput virtual screening, automated synthesis planning, automated laboratories and machine learning algorithms. In addition to reducing the time to deployment of new materials by an order of magnitude, this integrated approach is expected to lower the cost associated with the initial discovery. Thus, the price of the final products (for example, solar panels, batteries and electric vehicles) will also decrease. This in turn will enable industries and governments to meet more ambitious targets in terms of reducing greenhouse gas emissions at a faster pace.

  20. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  1. 75 FR 71146 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ...''); Amistar Automation, Inc. (``Amistar'') of San Marcos, California; Techno Soft Systemnics, Inc. (``Techno..., the ALJ's construction of the claim terms ``test,'' ``match score surface,'' and ``gradient direction...

  2. Contour scanning of textile preforms using a light-section sensor for the automated manufacturing of fibre-reinforced plastics

    NASA Astrophysics Data System (ADS)

    Schmitt, R.; Niggemann, C.; Mersmann, C.

    2008-04-01

    Fibre-reinforced plastics (FRP) are particularly suitable for components where light-weight structures with advanced mechanical properties are required, e.g. for aerospace parts. Nevertheless, many manufacturing processes for FRP include manual production steps without integrated quality control. A vital step in the process chain is the lay-up of the textile preform, as it greatly affects the geometry and the mechanical performance of the final part. In order to automate FRP production, an inline machine vision system is needed for closed-loop control of the preform lay-up. This work describes the development of a novel laser light-section sensor for the optical inspection of textile preforms and its integration and validation in a machine vision prototype. The proposed method determines the contour position of each textile layer through edge scanning; the scanning route is automatically derived in a preliminary step using texture analysis algorithms. As sensor output, a distinct step profile is computed from the acquired greyscale image, and the contour position is determined with sub-pixel accuracy using a novel algorithm based on non-linear least-squares fitting to a sigmoid function. The whole contour is then generated through data fusion of the measured edge points. The proposed method provides robust process automation for FRP production, improving process quality and reducing the scrap rate. Hence, the range of economically feasible FRP products can be increased and new market segments with cost-sensitive products can be addressed.
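
    The sub-pixel step can be sketched with SciPy: fit a sigmoid to the 1-D gray-value profile across the edge and take the fitted centre as the contour position. The parameterization and initial guesses below are generic assumptions, not the authors' algorithm.

    ```python
    # Sketch: sub-pixel edge localization by sigmoid fitting.
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x, a, b, c, w):
        """a: base level, b: step height, c: edge centre, w: edge width."""
        return a + b / (1.0 + np.exp(-(x - c) / w))

    def subpixel_edge(profile):
        """Fit the sigmoid to a 1-D profile across the edge and return
        the fitted centre c as the sub-pixel edge position."""
        x = np.arange(len(profile), dtype=float)
        p0 = [profile.min(), np.ptp(profile), len(profile) / 2.0, 1.0]
        popt, _ = curve_fit(sigmoid, x, profile.astype(float), p0=p0)
        return popt[2]
    ```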

  3. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  4. Automated Low-Cost Smartphone-Based Lateral Flow Saliva Test Reader for Drugs-of-Abuse Detection

    PubMed Central

    Carrio, Adrian; Sampedro, Carlos; Sanchez-Lopez, Jose Luis; Pimienta, Miguel; Campoy, Pascual

    2015-01-01

    Lateral flow assay tests are nowadays becoming powerful, low-cost diagnostic tools. Obtaining a result is usually subject to visual interpretation of colored areas on the test by a human operator, introducing subjectivity and the possibility of errors in the extraction of the results. While automated test readers providing a result-consistent solution are widely available, they usually lack portability. In this paper, we present a smartphone-based automated reader for drug-of-abuse lateral flow assay tests, consisting of an inexpensive light box and a smartphone device. Test images captured with the smartphone camera are processed in the device using computer vision and machine learning techniques to perform automatic extraction of the results. A deep validation of the system has been carried out showing the high accuracy of the system. The proposed approach, applicable to any line-based or color-based lateral flow test in the market, effectively reduces the manufacturing costs of the reader and makes it portable and massively available while providing accurate, reliable results. PMID:26610513

  5. Automating the Processing of Earth Observation Data

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wan-Lin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we are developing a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products.

  6. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
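
    As a generic illustration of the problem class (not the specific algorithms tested against the Orbital Express data), relative position and attitude can be recovered from 2-D measurements of known 3-D target features with OpenCV's solvePnP; all coordinates and the camera matrix below are made up for the example.

    ```python
    # Generic pose-from-known-features sketch with OpenCV's solvePnP.
    import cv2
    import numpy as np

    # 3-D spot positions in the target's body frame (metres, illustrative)
    model_pts = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.0, 0.2, 0.0],
                          [0.2, 0.2, 0.05]], dtype=np.float64)
    # Corresponding 2-D sensor measurements (pixels, illustrative)
    image_pts = np.array([[320.0, 240.0],
                          [420.0, 238.0],
                          [322.0, 140.0],
                          [424.0, 138.0]], dtype=np.float64)
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])   # assumed pinhole camera matrix

    ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
    print("relative attitude (Rodrigues vector):", rvec.ravel())
    print("relative position (m):", tvec.ravel())
    ```

    Comparing solutions like this one against flight truth data, across different fields of view and pixel noise levels, is exactly the kind of trade the paper describes.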

  7. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    PubMed

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  8. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  9. Self correction of refractive error among young people in rural China: results of cross sectional investigation

    PubMed Central

    Zhang, Mingzhi; Zhang, Riping; He, Mingguang; Liang, Wanling; Li, Xiaofeng; She, Lingbing; Yang, Yunli; MacKenzie, Graeme; Silver, Joshua D; Ellwein, Leon; Moore, Bruce

    2011-01-01

    Objective To compare outcomes between adjustable spectacles and conventional methods for refraction in young people. Design Cross sectional study. Setting Rural southern China. Participants 648 young people aged 12-18 (mean 14.9 (SD 0.98)), with uncorrected visual acuity ≤6/12 in either eye. Interventions All participants underwent self refraction without cycloplegia (paralysis of near focusing ability with topical eye drops), automated refraction without cycloplegia, and subjective refraction by an ophthalmologist with cycloplegia. Main outcome measures Uncorrected and corrected vision, improvement of vision (lines on a chart), and refractive error. Results Among the participants, 59% (384) were girls, 44% (288) wore spectacles, and 61% (393/648) had 2.00 dioptres or more of myopia in the right eye. All completed self refraction. The proportion with visual acuity ≥6/7.5 in the better eye was 5.2% (95% confidence interval 3.6% to 6.9%) for uncorrected vision, 30.2% (25.7% to 34.8%) for currently worn spectacles, 96.9% (95.5% to 98.3%) for self refraction, 98.4% (97.4% to 99.5%) for automated refraction, and 99.1% (98.3% to 99.9%) for subjective refraction (P=0.033 for self refraction v automated refraction, P=0.001 for self refraction v subjective refraction). Improvements over uncorrected vision in the better eye with self refraction and subjective refraction were within one line on the eye chart in 98% of participants. In logistic regression models, failure to achieve maximum recorded visual acuity of 6/7.5 in right eyes with self refraction was associated with greater absolute value of myopia/hyperopia (P<0.001), greater astigmatism (P=0.001), and not having previously worn spectacles (P=0.002), but not age or sex. Significant inaccuracies in power (≥1.00 dioptre) were less common in right eyes with self refraction than with automated refraction (5% v 11%, P<0.001). Conclusions Though visual acuity was slightly worse with self refraction than automated or subjective refraction, acuity was excellent in nearly all these young people with inadequately corrected refractive error at baseline. Inaccurate power was less common with self refraction than automated refraction. Self refraction could decrease the requirement for scarce trained personnel, expensive devices, and cycloplegia in children’s vision programmes in rural China. PMID:21828207

  10. Early detection of glaucoma using fully automated disparity analysis of the optic nerve head (ONH) from stereo fundus images

    NASA Astrophysics Data System (ADS)

    Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.

    2006-03-01

    Early detection of structural damage to the optic nerve head (ONH) is critical in diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours by computing accumulated disparities in the disc and cup regions from stereo fundus image pairs has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation among computer-generated and manually segmented cup to disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.
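
    For orientation, dense disparity between a rectified stereo pair can be computed with OpenCV's block matcher. The sketch below is a generic stand-in for the disparity step, not the authors' accumulated-disparity algorithm, and the file names are placeholders.

        import cv2

        left = cv2.imread("fundus_left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("fundus_right.png", cv2.IMREAD_GRAYSCALE)

        # numDisparities must be a multiple of 16; blockSize must be odd.
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(left, right)      # fixed-point, scaled by 16
        depth_like = disparity.astype("float32") / 16.0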

  11. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high-order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored from four vantage points. First, a brief history of perceptual organization research in both human and computer vision is offered. Second, a classificatory structure in which to cast perceptual organization research is proposed, to clarify both the nomenclature and the relationships among the many contributions. Third, the perceptual organization work in computer vision is reviewed in the context of this classificatory structure. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  12. Atlantic salmon skin and fillet color changes effected by perimortem handling stress, rigor mortis, and ice storage.

    PubMed

    Erikson, U; Misimi, E

    2008-03-01

    The changes in skin and fillet color of anesthetized and exhausted Atlantic salmon were determined immediately after killing, during rigor mortis, and after ice storage for 7 d. Skin color (CIE L*, a*, b*, and related values) was determined by a Minolta Chroma Meter. Roche SalmoFan Lineal and Roche Color Card values were determined by a computer vision method and a sensory panel. Before color assessment, the stress levels of the 2 fish groups were characterized in terms of white muscle parameters (pH, rigor mortis, and core temperature). The results showed that perimortem handling stress initially significantly affected several color parameters of skin and fillets. Significant transient fillet color changes also occurred in the prerigor phase and during the development of rigor mortis. Our results suggested that fillet color was affected by postmortem glycolysis (pH drop, particularly in anesthetized fillets), then by onset and development of rigor mortis. The color change patterns during storage were different for the 2 groups of fish. The computer vision method was considered suitable for automated (online) quality control and grading of salmonid fillets according to color.

  13. High-fidelity, low-cost, automated method to assess laparoscopic skills objectively.

    PubMed

    Gray, Richard J; Kahol, Kanav; Islam, Gazi; Smith, Marshall; Chapital, Alyssa; Ferrara, John

    2012-01-01

    We sought to define the extent to which a motion analysis-based assessment system constructed with simple equipment could measure technical skill objectively and quantitatively. An "off-the-shelf" digital video system was used to capture the hand and instrument movement of surgical trainees (beginner level = PGY-1, intermediate level = PGY-3, and advanced level = PGY-5/fellows) while they performed a peg transfer exercise. The video data were passed through a custom computer vision algorithm that analyzed incoming pixels to measure movement smoothness objectively. The beginner-level group had the poorest performance, whereas those in the advanced group generated the highest scores. Intermediate-level trainees scored significantly (p < 0.04) better than beginner trainees. Advanced-level trainees scored significantly better than intermediate-level trainees and beginner-level trainees (p < 0.04 and p < 0.03, respectively). A computer vision-based analysis of surgical movements provides an objective basis for technical expertise-level analysis with construct validity. The technology to capture the data is simple, low cost, and readily available, and it obviates the need for expert human assessment in this setting. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
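
    The abstract does not specify the smoothness measure; one common choice in the motion-analysis literature is a jerk-based cost (the integrated squared third derivative of position), sketched here on a placeholder 2D instrument trajectory. This is an illustrative assumption, not the authors' actual algorithm.

        import numpy as np

        def smoothness_cost(xy, fs):
            # xy: (N, 2) instrument positions; fs: sampling rate in Hz.
            jerk = np.diff(xy, n=3, axis=0) * fs**3   # third finite difference
            return float(np.sum(jerk ** 2) / fs)      # integrated squared jerk

        rng = np.random.default_rng(0)
        t = np.linspace(0, 5, 150)
        smooth_path = np.c_[np.cos(t), np.sin(t)]
        shaky_path = smooth_path + rng.normal(0, 0.01, smooth_path.shape)
        # A noisier (less smooth) movement yields a larger cost.
        print(smoothness_cost(smooth_path, 30), "<", smoothness_cost(shaky_path, 30))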

  14. Automated Counting of Particles To Quantify Cleanliness

    NASA Technical Reports Server (NTRS)

    Rhode, James

    2005-01-01

    A machine vision system, similar to systems used in microbiological laboratories to count cultured microbes, has been proposed for quantifying the cleanliness of nominally precisely cleaned hardware by counting residual contaminant particles. The system would include a microscope equipped with an electronic camera and circuitry to digitize the camera output, a personal computer programmed with machine-vision and interface software, and digital storage media. A filter pad, through which solvent from rinsing the hardware in question had been aspirated, would be placed on the microscope stage. A high-resolution image of the filter pad would be recorded. The computer would analyze the image and present a histogram of sizes of particles on the filter. On the basis of the histogram and a measure of the desired level of cleanliness, the hardware would be accepted or rejected. If the hardware were accepted, the image would be saved, along with other information, as a quality record. If the hardware were rejected, the histogram and ancillary information would be recorded for analysis of trends. The software would detect particles that are too large or too numerous to meet a specified particle-distribution profile. Anomalous particles or fibrous material would be flagged for inspection.
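
    A minimal sketch of the counting step described above, using OpenCV and NumPy: threshold a grayscale image of the filter pad, label connected components, and histogram the particle areas. The file name, the threshold polarity, and the acceptance limits are illustrative assumptions, not part of the proposed system.

        import cv2
        import numpy as np

        # Particles are assumed darker than the pad; invert so they become foreground.
        img = cv2.imread("filter_pad.png", cv2.IMREAD_GRAYSCALE)
        _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

        # Label connected components and collect per-particle areas (in pixels).
        n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        areas = stats[1:, cv2.CC_STAT_AREA]          # skip label 0, the background

        # Histogram of particle sizes on a logarithmic scale.
        counts, edges = np.histogram(areas, bins=np.logspace(0, 4, 20))

        # Accept or reject against a hypothetical cleanliness limit.
        accept = (len(areas) <= 100) and (areas.max(initial=0) <= 500)
        print(len(areas), "particles; accept =", accept)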

  15. A Computer Vision Approach to Identify Einstein Rings and Arcs

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Hsiu

    2017-03-01

    Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at all position angles, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or from visual inspection of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognizes circular patterns or arcs in images. We propose a two-tier approach: first pre-select massive galaxies associated with multiple blue objects as possible lenses, then use the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, achieving high completeness, albeit with low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating the versatility of our approach.
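
    For illustration, the circle Hough transform at the heart of this approach is available in OpenCV as cv2.HoughCircles; the sketch below shows the idea on a single cutout image. The file name and all parameter values are placeholders, not the survey pipeline's actual settings.

        import cv2
        import numpy as np

        gray = cv2.imread("galaxy_cutout.png", cv2.IMREAD_GRAYSCALE)
        gray = cv2.medianBlur(gray, 5)               # suppress pixel noise

        circles = cv2.HoughCircles(
            gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
            param1=100,   # upper Canny threshold used internally
            param2=30,    # accumulator vote threshold; lower catches partial arcs
            minRadius=5, maxRadius=50)

        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                cv2.circle(gray, (x, y), r, 255, 1)  # mark ring candidates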

  16. Fusion of monocular cues to detect man-made structures in aerial imagery

    NASA Technical Reports Server (NTRS)

    Shufelt, Jefferey; Mckeown, David M.

    1991-01-01

    The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects, as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is therefore useful in approaching the building extraction problem. Under this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.

  17. Parallel computation of level set method for 500 Hz visual servo control

    NASA Astrophysics Data System (ADS)

    Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi

    2008-11-01

    We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). This system keeps a single microorganism in the middle of the visual field under a microscope by visually servoing an automated stage. We propose a new energy function for the level set method. This function constrains the amount of light intensity inside the detected object contour in order to control the number of detected objects. The algorithm is implemented in the CPV system, and the computational time is approximately 2 ms per frame. A tracking experiment of about 25 s is demonstrated. We also demonstrate that a single paramecium can be kept in track even if other paramecia appear in the visual field and make contact with the tracked paramecium.

  18. Where to look? Automating attending behaviors of virtual human characters

    NASA Technical Reports Server (NTRS)

    Chopra Khullar, S.; Badler, N. I.

    2001-01-01

    This research proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. Such behaviors directly control eye and head motions, and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations known from psychology, human factors and computer vision. Deliberate behaviors, the analogs of scanpaths in visual psychology, compete with involuntary attention capture and lapses into idling or free viewing. Insights provided by implementing this framework are: a defined set of parameters that impact the observable effects of attention, a defined vocabulary of looking behaviors for certain motor and cognitive activity, a defined hierarchy of three levels of eye behavior (endogenous, exogenous and idling) and a proposed method of how these types interact.

  19. Machine vision system for online inspection of freshly slaughtered chickens

    USDA-ARS?s Scientific Manuscript database

    A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...

  20. Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.

    PubMed

    Wong, Christopher Yee; Mills, James K

    2017-03-01

    Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The objective of this work is to develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest away from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos with positive results, as adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge proper LZD procedure. Automation of LZD removes human error to increase the success rate of LZD. Although the proposed methods are developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or nonembryonic cells.
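
    As a rough illustration of the second stage, the toy loop below evolves pulse positions and durations to minimize a surrogate objective. Both the objective (a stand-in for the paper's thermal model) and the stripped-down selection-plus-mutation scheme are simplifying assumptions, not the authors' genetic algorithm.

        import numpy as np

        rng = np.random.default_rng(1)
        N_PULSES = 4                                  # pulses along the ZP opening

        def peak_temperature(genome):
            # Surrogate objective: heating grows with pulse duration and with
            # proximity to a blastomere placed (arbitrarily) at position 0.5.
            # A real objective would also enforce the required opening size.
            pos, dur = genome[:N_PULSES], genome[N_PULSES:]
            return float(np.sum(dur / (0.1 + np.abs(pos - 0.5))))

        pop = rng.uniform(0, 1, size=(50, 2 * N_PULSES))
        for _ in range(100):
            fitness = np.array([peak_temperature(g) for g in pop])
            elite = pop[np.argsort(fitness)[:10]]     # keep the 10 coolest plans
            pop = np.clip(elite[rng.integers(0, 10, 50)]
                          + rng.normal(0, 0.02, (50, 2 * N_PULSES)), 0, 1)

        best = pop[np.argmin([peak_temperature(g) for g in pop])]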

  1. Automatic vision system for analysis of microscopic behavior of flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Dickenson, Eric; Daemi, M. Farhang

    1997-10-01

    This paper describes the development of a novel automated and efficient vision system to obtain velocity and concentration measurements within a porous medium. An aqueous fluid, laced with a fluorescent dye or microspheres, flows through a transparent, refractive-index-matched column packed with transparent crystals. For illumination purposes, a planar sheet of laser light passes through the column as a CCD camera records the laser-illuminated planes. Detailed microscopic velocity and concentration fields have been computed within a 3D volume of the column. For velocity measurements, while the aqueous fluid laced with fluorescent microspheres flows through the transparent medium, a CCD camera records the motions of the fluorescing particles to a video cassette recorder. The recorded images are acquired automatically, frame by frame, and transferred to the computer for processing, using a frame grabber and purpose-written algorithms over an RS-232 interface. Because the grabbed images are of poor quality at this stage, several preprocessing steps are applied to enhance the particles within the images. Finally, these enhanced particles are tracked to calculate velocity vectors in the plane of the beam. For concentration measurements, while the aqueous fluid laced with a fluorescent organic dye flows through the transparent medium, a CCD camera sweeps back and forth across the column and records concentration slices on the planes illuminated by the laser beam traveling simultaneously with the camera. These recorded images are subsequently transferred to the computer and processed in a similar fashion to the velocity measurements. To make the vision system fully automatic, several detailed image processing techniques were developed to match images that have different intensity values but the same topological characteristics. This yields normalized interstitial chemical concentrations as a function of time within the porous column.

  2. Identification of Cichlid Fishes from Lake Malawi Using Computer Vision

    PubMed Central

    Joo, Deokjin; Kwan, Ye-seul; Song, Jongwoo; Pinho, Catarina; Hey, Jody; Won, Yong-Jin

    2013-01-01

    Background: The explosively radiating evolution of the cichlid fishes of Lake Malawi has yielded an amazing number of haplochromine species, estimated at as many as 500 to 800, with a surprising degree of diversity not only in color and stripe pattern but also in jaw and body shape. As these morphological diversities have been a central subject of adaptive speciation and taxonomic classification, such high diversity could serve as a foundation for automated species identification of cichlids. Methodology/Principal Findings: Here we demonstrate a method for automatic classification of the Lake Malawi cichlids based on computer vision and geometric morphometrics. To this end we developed a pipeline that integrates multiple image processing tools to automatically extract informative features of color and stripe patterns from a large set of photographic images of wild cichlids. The extracted information was evaluated by the statistical classifiers Support Vector Machine and Random Forests. Both classifiers performed better when body shape information was added to the color and stripe features. Beyond the coloration and stripe pattern, body shape variables boosted the accuracy of classification by about 10%. The programs were able to classify 594 live cichlid individuals belonging to 12 different classes (species and sexes) with an average accuracy of 78%, in contrast to a mere 42% success rate by human eyes. The variables that contributed most to the accuracy were body height and the hue of the most frequent color. Conclusions: Computer vision showed a notable performance in extracting information from the color and stripe patterns of Lake Malawi cichlids, although the information was not enough for error-free species identification. Our results indicate that there appears to be an unavoidable difficulty in automatic species identification of cichlid fishes, which may arise from short divergence times and gene flow between closely related species. PMID:24204918
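
    The classification stage lends itself to a compact sketch with scikit-learn; the feature vectors below are random stand-ins for the extracted color/stripe/shape features (594 individuals and 12 classes, per the abstract), so the printed accuracies are meaningless placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(594, 40))                # placeholder feature vectors
        y = rng.integers(0, 12, size=594)             # placeholder class labels

        svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
        rf = RandomForestClassifier(n_estimators=500, random_state=0)

        for name, clf in (("SVM", svm), ("Random Forest", rf)):
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"{name}: cross-validated accuracy {acc:.2f}")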

  3. Speed control for a mobile robot

    NASA Astrophysics Data System (ADS)

    Kolli, Kaylan C.; Mallikarjun, Sreeram; Kola, Krishnamohan; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a speed control for a modular autonomous mobile robot controller. The speed control of the traction motor is essential for safe operation of a mobile robot. Autonomous operation of a vehicle demands safe, runaway-free, and collision-free operation. A mobile robot test-bed has been constructed using a golf cart base. The computer-controlled speed control has been implemented and works with guidance provided by a vision system and obstacle avoidance provided by ultrasonic sensor systems. A 486 computer supervises the speed control through a 3-axis motion controller. The traction motor is controlled via the computer by an EV-1 speed control. Testing of the system was done both in the lab and on an outside course with positive results. This design is a prototype, and suggestions for improvements are also given. The autonomous speed controller is applicable to any computer-controlled electric-drive mobile vehicle.

  4. Development of a mobile robot for the 1995 AUVS competition

    NASA Astrophysics Data System (ADS)

    Matthews, Bradley O.; Ruthemeyer, Michael A.; Perdue, David; Hall, Ernest L.

    1995-12-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. The speed and steering control are supervised by a 486 computer through a 3-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable, independent system in which even computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data are collected through a commercial tracking device, which communicates the X,Y coordinates of the lane marker to the computer. Testing of these systems yielded positive results, showing that at five mph the vehicle can follow a line and at the same time avoid obstacles. This design, in its modularity, creates a portable autonomous controller applicable to any mobile vehicle with only minor adaptations.

  5. Convolutional neural network guided blue crab knuckle detection for autonomous crab meat picking machine

    NASA Astrophysics Data System (ADS)

    Wang, Dongyi; Vinson, Robert; Holmes, Maxwell; Seibel, Gary; Tao, Yang

    2018-04-01

    The Atlantic blue crab is among the highest-valued seafood found along the American Eastern Seaboard. Currently, the crab processing industry is highly dependent on manual labor. However, there is great potential for vision-guided intelligent machines to automate the meat picking process. Studies show that the back-fin knuckles are robust features containing information about a crab's size, orientation, and the position of the crab's meat compartments. Our studies also make it clear that detecting the knuckles reliably in images is challenging due to the knuckle's small size, anomalous shape, and similarity to joints in the legs and claws. An accurate and reliable computer vision algorithm was proposed to detect the crab's back-fin knuckles in digital images. Convolutional neural networks (CNNs) can localize rough knuckle positions with 97.67% accuracy, transforming a global detection problem into a local detection problem. Compared to rough localization based on human experience or other machine learning classification methods, the CNN shows the best localization results. Within the rough knuckle position, a k-means clustering method is able to further extract the exact knuckle positions based on the back-fin knuckle color features. The exact knuckle position can help us generate a crab cutline in the XY plane using a template matching method. This is a pioneering research project in crab image analysis and offers advanced machine intelligence for automated crab processing.
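
    A hedged sketch of the refinement step: k-means clustering on pixel colors inside the CNN's rough window, then selecting the cluster nearest an expected knuckle hue. The window coordinates, the number of clusters, and the target hue are illustrative assumptions, not values from the paper.

        import cv2
        import numpy as np

        roi = cv2.imread("crab.png")[200:260, 300:360]   # rough CNN window (assumed)
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        pixels = hsv.reshape(-1, 3).astype(np.float32)

        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5,
                                        cv2.KMEANS_PP_CENTERS)

        # Pick the cluster whose mean hue is closest to the expected knuckle
        # hue (placeholder value), then take its pixel centroid.
        k = int(np.argmin(np.abs(centers[:, 0] - 15.0)))
        ys, xs = np.nonzero(labels.reshape(hsv.shape[:2]) == k)
        knuckle_xy = (xs.mean(), ys.mean())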

  6. Automated Video Quality Assessment for Deep-Sea Video

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): the rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating these effects. These steps include filtering out unusable data, color and luminance balancing, and choosing the most appropriate image descriptors. We apply these techniques to generate automated quality assessment of video data and illustrate their utility with an example application where we perform vision-based substrate classification.
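
    In the same spirit, very simple per-frame statistics go a long way as first-pass quality screens; the sketch below scores luminance (lighting failures), Laplacian variance (blur from turbidity or focus hunting), and contrast. The thresholds and file name are illustrative assumptions, not ONC's actual criteria.

        import cv2

        def frame_quality(gray):
            return {
                "luminance": float(gray.mean()),                 # too dark or bright?
                "focus": cv2.Laplacian(gray, cv2.CV_64F).var(),  # low = blurry
                "contrast": float(gray.std()),                   # low = washed out
            }

        cap = cv2.VideoCapture("dive.mp4")
        usable = total = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            total += 1
            q = frame_quality(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            if q["luminance"] > 20 and q["focus"] > 50 and q["contrast"] > 15:
                usable += 1
        cap.release()
        print(f"{usable}/{total} frames pass the quality screen")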

  7. Objective and automated measurement of dynamic vision functions

    NASA Technical Reports Server (NTRS)

    Flom, M. C.; Adams, A. J.

    1976-01-01

    A phoria stimulus array and electro-oculographic (EOG) arrangements for measuring motor and sensory responses of subjects subjected to stress or drug conditions are described, along with experimental procedures. Heterophoria (as oculomotor function) and glare recovery time (time required for photochemical and neural recovery after exposure to a flash stimulus) are measured, in research aimed at developing automated objective measurement of dynamic vision functions. Onset of involuntary optokinetic nystagmus in subjects attempting to track moving stripes (while viewing through head-mounted binocular eyepieces) after exposure to glare serves as an objective measure of glare recovery time.

  8. Video-processing-based system for automated pedestrian data collection and analysis when crossing the street

    NASA Astrophysics Data System (ADS)

    Mansouri, Nabila; Watelain, Eric; Ben Jemaa, Yousra; Motamed, Cina

    2018-03-01

    Computer-vision techniques for pedestrian detection and tracking have progressed considerably and become widely used in several applications. However, a quick glance at the literature shows minimal use of these techniques in pedestrian behavior and safety analysis, which might be due to the technical complexities involved in processing pedestrian videos. To extract pedestrian trajectories from a video automatically, all road users must be detected and tracked through the sequences, which is a challenging task, especially in a congested open-outdoor urban space. A multipedestrian tracker based on an interframe detection-association process was proposed and evaluated. The tracker results are used to implement an automatic, video-processing-based tool for collecting pedestrian data during street crossing. Variations in the instantaneous speed allowed the detection of the street-crossing phases (approach, waiting, and crossing). These phases were addressed for the first time in pedestrian road safety analysis, to illustrate the causal relationship between pedestrian behaviors in the different phases. A comparison with a manual data collection method, by computing the root mean square error and the Pearson correlation coefficient, confirmed that the proposed procedures have significant potential to automate the data collection process.
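
    The two comparison statistics are straightforward to reproduce; a minimal sketch with placeholder speed measurements (automated tracker vs. manual annotation, values invented for the example):

        import numpy as np

        auto = np.array([1.31, 1.28, 0.42, 0.05, 1.10])    # m/s, tracker output
        manual = np.array([1.25, 1.30, 0.40, 0.00, 1.15])  # m/s, manual annotation

        rmse = np.sqrt(np.mean((auto - manual) ** 2))      # root mean square error
        r = np.corrcoef(auto, manual)[0, 1]                # Pearson correlation
        print(f"RMSE = {rmse:.3f} m/s, r = {r:.3f}")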

  9. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
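
    Since FLANN ships inside OpenCV, a minimal usage sketch looks like the following; the image names are placeholders, and the index parameters (algorithm=1 selects the randomized k-d forest described in the paper) are typical values rather than tuned ones.

        import cv2

        img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("train.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        _, des1 = sift.detectAndCompute(img1, None)
        _, des2 = sift.detectAndCompute(img2, None)

        # algorithm=1: randomized k-d trees; checks: leaves visited per query.
        flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
        matches = flann.knnMatch(des1, des2, k=2)

        # Lowe's ratio test keeps only distinctive matches.
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        print(len(good), "good matches")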

  10. Operator Station Design System - A computer aided design approach to work station layout

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.

    1979-01-01

    The Operator Station Design System is resident in NASA's Johnson Space Center Spacecraft Design Division Performance Laboratory. It includes stand-alone minicomputer hardware and Panel Layout Automated Interactive Design and Crew Station Assessment of Reach software. The data base consists of the Shuttle Transportation System Orbiter Crew Compartment (in part), the Orbiter payload bay and remote manipulator (in part), and various anthropometric populations. The system is utilized to provide panel layouts, assess reach and vision, determine interference and fit problems early in the design phase, study design applications as a function of anthropometric and mission requirements, and to accomplish conceptual design to support advanced study efforts.

  11. Automatic identification of bacterial types using statistical imaging methods

    NASA Astrophysics Data System (ADS)

    Trattner, Sigal; Greenspan, Hayit; Tepper, Gapi; Abboud, Shimon

    2003-05-01

    The objective of the current study is to develop an automatic tool to identify bacterial types using computer-vision and statistical modeling techniques. Bacteriophage (phage)-typing methods are used to identify and extract representative profiles of bacterial types, such as Staphylococcus aureus. Current systems rely on the subjective reading of plaque profiles by a human expert. This process is time-consuming and prone to errors, especially as technology enables an increase in the number of phages used for typing. The statistical methodology presented in this work provides an automated, objective, and robust analysis of visual data, along with the ability to cope with increasing data volumes.

  12. The role of robotics in computer controlled polishing of large and small optics

    NASA Astrophysics Data System (ADS)

    Walker, David; Dunn, Christina; Yu, Guoyu; Bibby, Matt; Zheng, Xiao; Wu, Hsing Yu; Li, Hongyu; Lu, Chunlian

    2015-08-01

    Following formal acceptance by ESO of three 1.4m hexagonal off-axis prototype mirror segments, one circular segment, and certification of our optical test facility, we turn our attention to the challenge of segment mass-production. In this paper, we focus on the role of industrial robots, highlighting complementarity with Zeeko CNC polishing machines, and presenting results using robots to provide intermediate processing between CNC grinding and polishing. We also describe the marriage of robots and Zeeko machines to automate currently manual operations; steps towards our ultimate vision of fully autonomous manufacturing cells, with impact throughout the optical manufacturing community and beyond.

  13. Computer vision syndrome: a review.

    PubMed

    Blehm, Clayton; Vishnu, Seema; Khattak, Ashbala; Mitra, Shrabanee; Yee, Richard W

    2005-01-01

    As computers become part of our everyday life, more and more people are experiencing a variety of ocular symptoms related to computer use. These include eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome. This article describes both the characteristics and the treatment modalities that are available at this time. Computer vision syndrome symptoms may have ocular (ocular-surface abnormalities or accommodative spasms) and/or extraocular (ergonomic) etiologies. However, the major contributor to computer vision syndrome symptoms by far appears to be dry eye. The visual effects of various display characteristics such as lighting, glare, display quality, refresh rates, and radiation are also discussed. Treatment requires a multidirectional approach combining ocular therapy with adjustment of the workstation. Proper lighting, anti-glare filters, ergonomic positioning of the computer monitor, and regular work breaks may help improve visual comfort. Lubricating eye drops and special computer glasses help relieve ocular surface-related symptoms. More work needs to be done to specifically define the processes that cause computer vision syndrome and to develop and improve effective treatments that successfully address these causes.

  14. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, together with convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and a potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying, and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  15. Machine-vision-based roadway health monitoring and assessment : development of a shape-based pavement-crack-detection approach.

    DOT National Transportation Integrated Search

    2016-01-01

    State highway agencies (SHAs) routinely employ semi-automated and automated image-based methods for network-level : pavement-cracking data collection, and there are different types of pavement-cracking data collected by SHAs for reporting and : manag...

  16. Manifold learning in machine vision and robotics

    NASA Astrophysics Data System (ADS)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in Machine vision and Robotics to organize or extract high-level information from the available data. Nowadays, Machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in Machine vision; camera, laser, and sonar sensor data in Robotics) in order to solve various subject-oriented tasks such as understanding and classifying image content, navigating a mobile autonomous robot in uncertain environments, and manipulating robots in medical robotics and computer-assisted surgery. Such data usually have high dimensionality; however, due to various dependencies between their components and constraints of physical origin, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space" and have a smaller intrinsic dimensionality. The generally accepted model for such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources conform to this model as a rule. Manifold learning, which discovers the low-dimensional structure of high-dimensional data and yields effective algorithms for a large number of subject-oriented tasks in Machine vision and Robotics, is the subject of the conference plenary speech, some topics of which are covered in this paper.
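
    A self-contained illustration of the manifold model with scikit-learn: Isomap embeds 64-dimensional digit images into two dimensions, recovering the kind of low-dimensional structure the paper discusses. The dataset and parameters are chosen for convenience, not taken from the talk.

        from sklearn.datasets import load_digits
        from sklearn.manifold import Isomap

        X, _ = load_digits(return_X_y=True)        # 1797 images, 64 pixels each
        emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
        print(X.shape, "->", emb.shape)            # (1797, 64) -> (1797, 2)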

  17. Quaternions in computer vision and robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pervin, E.; Webb, J.A.

    1982-01-01

    Computer vision and robotics suffer from not having good tools for manipulating three-dimensional objects. Vectors, coordinate geometry, and trigonometry all have deficiencies. Quaternions can be used to solve many of these problems. Many properties of quaternions that are relevant to computer vision and robotics are developed. Examples are given showing how quaternions can be used to simplify derivations in computer vision and robotics.
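
    As a concrete instance of the kind of simplification the authors have in mind, rotating a 3D point p by a unit quaternion q reduces to the product q p q*; a plain-NumPy sketch:

        import numpy as np

        def qmul(a, b):
            # Hamilton product of quaternions stored as (w, x, y, z).
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2])

        def rotate(v, axis, angle):
            axis = axis / np.linalg.norm(axis)
            q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
            q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
            return qmul(qmul(q, np.concatenate(([0.0], v))), q_conj)[1:]

        # Rotating the x axis 90 degrees about z yields the y axis.
        print(rotate(np.array([1.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0]), np.pi / 2))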

  18. 75 FR 60478 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... Automation, Inc. (``Amistar'') of San Marcos, California; Techno Soft Systemnics, Inc. (``Techno Soft'') of... the claim terms ``test,'' ``match score surface,'' and ``gradient direction,'' all of his infringement... complainants' proposed construction for the claim terms ``test,'' ``match score surface,'' and ``gradient...

  19. Detection of eviscerated poultry spleen enlargement by machine vision

    NASA Astrophysics Data System (ADS)

    Tao, Yang; Shao, June J.; Skeeles, John K.; Chen, Yud-Ren

    1999-01-01

    The size of a poultry spleen is an indication of whether the bird is wholesome or has a virus-related disease. This study explored the possibility of detecting poultry spleen enlargement with a computer imaging system to assist human inspectors in food safety inspections. Images of 45-day-old hybrid turkey internal viscera were taken using fluorescent and UV lighting systems. Image processing algorithms including linear transformation, morphological operations, and statistical analyses were developed to distinguish the spleen from its surroundings and then to detect abnormal spleens. Experimental results demonstrated that the imaging method could effectively distinguish spleens from other organs and intestines. Based on a total sample of 57 birds, the classification rates were 92% for a self-test set and 95% for an independent test set for the correct detection of normal and abnormal birds. The methodology indicated the feasibility of using automated machine vision systems in the future to inspect internal organs and check the wholesomeness of poultry carcasses.

  20. Neural Network Target Identification System for False Alarm Reduction

    NASA Technical Reports Server (NTRS)

    Ye, David; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feed-forward back-propagation neural network (NN) is then trained to classify each feature vector and remove false positives. This paper discusses testing of the system's performance and the parameter optimization process that adapts the system to various targets and datasets. The test results show that the system was successful in substantially reducing the false positive rate when tested on a sonar image dataset.

  1. Benchmarking neuromorphic vision: lessons learnt from computer vision

    PubMed Central

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120

  2. Gaylord Information Systems: Poised for Its Second Century.

    ERIC Educational Resources Information Center

    Farley, Charles E., Jr.; And Others

    1993-01-01

    Describes the development of the GALAXY Integrated Library System by Gaylord Information Systems. Topics addressed include the library automation business; industry trends, both long-term and short-term; a history of Gaylord's automation ventures; Gaylord's vision of the future; and perspectives from two GALAXY users. (LRW)

  3. Automated Diabetic Retinopathy Screening and Monitoring Using Retinal Fundus Image Analysis.

    PubMed

    Bhaskaranand, Malavika; Ramachandra, Chaithanya; Bhat, Sandeep; Cuadros, Jorge; Nittala, Muneeswar Gupta; Sadda, SriniVas; Solanki, Kaushal

    2016-02-16

    Diabetic retinopathy (DR), a common complication of diabetes, is the leading cause of vision loss among the working-age population in the western world. DR is largely asymptomatic, but if detected at early stages the progression to vision loss can be significantly slowed. With the increasing diabetic population there is an urgent need for automated DR screening and monitoring. To address this growing need, in this article we discuss an automated DR screening tool and extend it for automated estimation of microaneurysm (MA) turnover, a potential biomarker for DR risk. The DR screening tool automatically analyzes color retinal fundus images from a patient encounter for the various DR pathologies and collates the information from all the images belonging to a patient encounter to generate a patient-level screening recommendation. The MA turnover estimation tool aligns retinal images from multiple encounters of a patient, localizes MAs, and performs MA dynamics analysis to evaluate new, persistent, and disappeared lesion maps and estimate MA turnover rates. The DR screening tool achieves 90% sensitivity at 63.2% specificity on a data set of 40,542 images from 5084 patient encounters obtained from the EyePACS telescreening system. On a subset of 7 longitudinal pairs, the MA turnover estimation tool identifies new and disappeared MAs with 100% sensitivity and average false positives of 0.43 and 1.6, respectively. The presented automated tools have the potential to address the growing need for DR screening and monitoring, thereby saving the vision of millions of diabetic patients worldwide. © 2016 Diabetes Technology Society.

  4. Computer vision-based diameter maps to study fluoroscopic recordings of small intestinal motility from conscious experimental animals.

    PubMed

    Ramírez, I; Pantrigo, J J; Montemayor, A S; López-Pérez, A E; Martín-Fontelles, M I; Brookes, S J H; Abalo, R

    2017-08-01

    When available, fluoroscopic recordings are a relatively cheap, non-invasive and technically straightforward way to study gastrointestinal motility. Spatiotemporal maps have been used to characterize motility of intestinal preparations in vitro, or in anesthetized animals in vivo. Here, a new automated computer-based method was used to construct spatiotemporal motility maps from fluoroscopic recordings obtained in conscious rats. Conscious, non-fasted, adult male Wistar rats (n=8) received intragastric administration of barium contrast and, 1-2 hours later, when several loops of the small intestine were well defined, a 2-minute fluoroscopic recording was obtained. Spatiotemporal diameter maps (Dmaps) were automatically calculated from the recordings. Three recordings were also analyzed manually for comparison. Frequency analysis was performed in order to calculate relevant motility parameters. In each conscious rat, a stable recording (17-20 seconds) was analyzed. The Dmaps obtained manually and automatically from the same recording were comparable, but the automated process was faster and provided higher resolution. Two frequencies of motor activity dominated: lower-frequency contractions (15.2±0.9 cpm) had an amplitude approximately five times greater than that of higher-frequency events (32.8±0.7 cpm). The automated method developed here needed little investigator input, provided high-resolution results with short computing times, and automatically compensated for breathing and other small movements, allowing recordings to be made without anesthesia. Although slow and/or infrequent events could not be detected in the short recording periods analyzed to date (17-20 seconds), this novel system enhances the analysis of in vivo motility in conscious animals. © 2017 John Wiley & Sons Ltd.
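
    The frequency analysis amounts to a Fourier transform of each diameter trace; a sketch on a synthetic trace whose two components mimic the reported rhythms (the sampling rate, amplitudes, and noise level are invented for the example):

        import numpy as np

        fs = 30.0                                    # frames per second (assumed)
        t = np.arange(0, 18, 1 / fs)                 # one 18 s stable recording
        rng = np.random.default_rng(0)
        diameter = (5 * np.sin(2 * np.pi * (15 / 60) * t)     # ~15 cpm component
                    + 1 * np.sin(2 * np.pi * (33 / 60) * t)   # ~33 cpm component
                    + rng.normal(0, 0.2, t.size))             # measurement noise

        spectrum = np.abs(np.fft.rfft(diameter - diameter.mean()))
        freqs_cpm = np.fft.rfftfreq(t.size, d=1 / fs) * 60
        print(f"dominant frequency: {freqs_cpm[spectrum.argmax()]:.1f} cpm")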

  5. The use of morphological characteristics and texture analysis in the identification of tissue composition in prostatic neoplasia.

    PubMed

    Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W

    2004-09-01

    Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
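
    The texture measurements belong to the Haralick/co-occurrence family, which scikit-image exposes directly; a sketch for one 100 × 100 subregion follows. Which graycoprops statistic corresponds to the paper's "feature 4" is not assumed here; the distances and angles are generic choices.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def texture_features(subregion):
            # subregion: a 100 x 100 uint8 grayscale tile.
            glcm = graycomatrix(subregion, distances=[1],
                                angles=[0, np.pi / 2], levels=256,
                                symmetric=True, normed=True)
            return {p: float(graycoprops(glcm, p).mean())
                    for p in ("contrast", "homogeneity", "energy", "correlation")}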

  6. Fully automated, deep learning segmentation of oxygen-induced retinopathy images

    PubMed Central

    Xiao, Sa; Bucher, Felicitas; Wu, Yue; Rokem, Ariel; Lee, Cecilia S.; Marra, Kyle V.; Fallon, Regis; Diaz-Aguilar, Sophia; Aguilar, Edith; Friedlander, Martin; Lee, Aaron Y.

    2017-01-01

    Oxygen-induced retinopathy (OIR) is a widely used model to study ischemia-driven neovascularization (NV) in the retina and to serve in proof-of-concept studies in evaluating antiangiogenic drugs for ocular, as well as nonocular, diseases. The primary parameters that are analyzed in this mouse model include the percentage of retina with vaso-obliteration (VO) and NV areas. However, quantification of these two key variables comes with a great challenge due to the requirement of human experts to read the images. Human readers are costly, time-consuming, and subject to bias. Using recent advances in machine learning and computer vision, we trained deep learning neural networks using over a thousand segmentations to fully automate segmentation in OIR images. While determining the percentage area of VO, our algorithm achieved a similar range of correlation coefficients to that of expert inter-human correlation coefficients. In addition, our algorithm achieved a higher range of correlation coefficients compared with inter-expert correlation coefficients for quantification of the percentage area of neovascular tufts. In summary, we have created an open-source, fully automated pipeline for the quantification of key values of OIR images using deep learning neural networks. PMID:29263301

  7. (Computer) Vision without Sight

    PubMed Central

    Manduchi, Roberto; Coughlan, James

    2012-01-01

    Computer vision holds great promise for helping persons with blindness or visual impairments (VI) to interpret and explore the visual world. To this end, it is worthwhile to assess the situation critically by understanding the actual needs of the VI population and which of these needs might be addressed by computer vision. This article reviews the types of assistive technology application areas that have already been developed for VI, and the possible roles that computer vision can play in facilitating these applications. We discuss how appropriate user interfaces are designed to translate the output of computer vision algorithms into information that the user can quickly and safely act upon, and how system-level characteristics affect the overall usability of an assistive technology. Finally, we conclude by highlighting a few novel and intriguing areas of application of computer vision to assistive technology. PMID:22815563

  8. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are constructed from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. The approximation networks also compute three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves the accuracy of traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  9. An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback.

    PubMed

    Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X; Tsao, Tsu-Chin

    2015-08-01

    This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying vein location is difficult and manual injections usually result in poor repeatability. To improve the injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in vein detection noise rejection, robustness in needle tracking, and visual servoing integration with the mechatronics system.

  10. Vision-based guidance for an automated roving vehicle

    NASA Technical Reports Server (NTRS)

    Griffin, M. D.; Cunningham, R. T.; Eskenazi, R.

    1978-01-01

    A controller designed to guide an automated vehicle to a specified target without external intervention is described. The intended application is to the requirements of planetary exploration, where substantial autonomy is required because of the prohibitive time lags associated with closed-loop ground control. The guidance algorithm consists of a set of piecewise-linear control laws for velocity and steering commands, and is executable in real time with fixed-point arithmetic. The use of a previously-reported object tracking algorithm for the vision system to provide position feedback data is described. Test results of the control system on a breadboard rover at the Jet Propulsion Laboratory are included.

  11. Automated and connected vehicle (AV/CV) test bed to improve transit, bicycle, and pedestrian safety : concept of operations plan.

    DOT National Transportation Integrated Search

    2017-02-01

    This document presents the Concept of Operations (ConOps) Plan for the Automated and Connected Vehicle (AV/CV) Test Bed to Improve Transit, Bicycle, and Pedestrian Safety. As illustrated in Figure 1, the plan presents the overarching vision and goals...

  12. Cell-Detection Technique for Automated Patch Clamping

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2008-01-01

    A unique and customizable machine-vision and image-data-processing technique has been developed for use in automated identification of cells that are optimal for patch clamping. [Patch clamping (in which patch electrodes are pressed against cell membranes) is an electrophysiological technique widely applied for the study of ion channels, and of membrane proteins that regulate the flow of ions across the membranes. Patch clamping is used in many biological research fields such as neurobiology, pharmacology, and molecular biology.] While there exist several hardware techniques for automated patch clamping of cells, very few of those techniques incorporate machine vision for locating cells that are ideal subjects for patch clamping. In contrast, the present technique is embodied in a machine-vision algorithm that, in practical application, enables the user to identify good and bad cells for patch clamping in an image captured by a charge-coupled-device (CCD) camera attached to a microscope, within a processing time of one second. Hence, the present technique can save time, thereby increasing efficiency and reducing cost. The present technique involves the utilization of cell-feature metrics to accurately make decisions on the degree to which individual cells are "good" or "bad" candidates for patch clamping. These metrics include position coordinates (x,y) in the image plane, major-axis length, minor-axis length, area, elongation, roundness, smoothness, angle of orientation, and degree of inclusion in the field of view. The present technique does not require any special hardware beyond commercially available, off-the-shelf patch-clamping hardware: a standard patch-clamping microscope system with an attached CCD camera, a personal computer with an image-data-processing board, and some experience in utilizing image-data-processing software are all that are needed. A cell image is first captured by the microscope CCD camera and image-data-processing board, then the image data are analyzed by software that implements the present machine-vision technique. This analysis results in the identification of cells that are "good" candidates for patch clamping. Once a "good" cell is identified, a patch clamp can be effected by an automated patch-clamping apparatus or by a human operator. This technique has been shown to enable reliable identification of "good" and "bad" candidate cells for patch clamping. The ultimate goal in further development of this technique is to combine artificial-intelligence processing with instrumentation and controls in order to produce a complete "turnkey" automated patch-clamping system capable of accurately and reliably patch clamping cells with a minimum of intervention by a human operator. Moreover, this technique can be adapted to virtually any cellular-analysis procedure that includes repetitive operation of microscope hardware by a human.
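
    The metric-based screening can be sketched with off-the-shelf tools; the snippet below is an assumption-laden illustration (not the NASA algorithm) that computes several of the listed metrics with scikit-image and applies invented "good cell" thresholds to a synthetic binary mask.

```python
# A hedged sketch, not the NASA algorithm: per-region shape metrics from a
# binary cell mask using scikit-image, with invented "good cell" thresholds.
# A real pipeline would first segment the CCD microscope image.
import numpy as np
from skimage import measure

yy, xx = np.mgrid[:200, :200]
mask = ((yy - 60) ** 2 + (xx - 60) ** 2 < 20 ** 2) | \
       ((yy - 140) ** 2 + (xx - 150) ** 2 < 12 ** 2)   # two synthetic "cells"

for region in measure.regionprops(measure.label(mask)):
    roundness = 4 * np.pi * region.area / max(region.perimeter, 1.0) ** 2
    elongation = region.major_axis_length / max(region.minor_axis_length, 1e-6)
    good = 300 < region.area < 5000 and roundness > 0.7 and elongation < 2.0
    print(region.centroid, round(roundness, 2), "good" if good else "bad")
```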

  13. Lumber Scanning System for Surface Defect Detection

    Treesearch

    D. Earl Kline; Y. Jason Hou; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1992-01-01

    This paper describes research aimed at developing a machine vision technology to drive automated processes in the hardwood forest products manufacturing industry. An industrial-scale machine vision system has been designed to scan variable-size hardwood lumber for detecting important features that influence the grade and value of lumber such as knots, holes, wane,...

  14. Coordinating complex decision support activities across distributed applications

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1994-01-01

    Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (APIs), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform-independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks!) for integrating and coordinating heterogeneous distributed applications. The top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level APIs to implement the desired interactions between distributed applications.

  15. Deep Space Network information system architecture study

    NASA Technical Reports Server (NTRS)

    Beswick, C. A.; Markley, R. W. (Editor); Atkinson, D. J.; Cooper, L. P.; Tausworthe, R. C.; Masline, R. C.; Jenkins, J. S.; Crowe, R. A.; Thomas, J. L.; Stoloff, M. J.

    1992-01-01

    The purpose of this article is to describe an architecture for the DSN information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990's. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies--i.e., computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

  16. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens up new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
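
    Two of the benchmarked task families can be sketched in a few lines with OpenCV; the snippet below is illustrative only (the input frame is a synthetic stand-in and the parameters are assumptions), not the implementations that were benchmarked.

```python
# Illustrative only: ORB keypoint extraction and Haar-cascade face detection
# with OpenCV, two of the classic tasks the record mentions.
import cv2
import numpy as np

gray = np.random.default_rng(0).integers(0, 256, (240, 320), dtype=np.uint8)

orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(gray, None)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(len(keypoints), "keypoints,", len(faces), "faces")
```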

  17. Towards computer-assisted TTTS: Laser ablation detection for workflow segmentation from fetoscopic video.

    PubMed

    Vasconcelos, Francisco; Brandão, Patrick; Vercauteren, Tom; Ourselin, Sebastien; Deprest, Jan; Peebles, Donald; Stoyanov, Danail

    2018-06-27

    Intrauterine foetal surgery is the treatment option for several congenital malformations. For twin-to-twin transfusion syndrome (TTTS), interventions involve the use of a laser fibre to ablate vessels in a shared placenta. The procedure presents a number of challenges for the surgeon, and computer-assisted technologies can potentially be a significant support. Vision-based sensing is the primary source of information from the intrauterine environment, and hence vision approaches are appealing for extracting higher-level information from the surgical site. In this paper, we propose a framework to detect one of the key steps during TTTS interventions: ablation. We adopt a deep learning approach, specifically the ResNet101 architecture, for classification of the different surgical actions performed during laser ablation therapy. We perform a two-fold cross-validation using almost 50,000 frames from five different TTTS ablation procedures. Our results show that deep learning methods are a promising approach for ablation detection. To our knowledge, this is the first attempt at automating photocoagulation detection using video, and our technique can be an important component of a larger assistive framework for enhanced foetal therapies. The current implementation does not include semantic segmentation or localisation of the ablation site, and this would be a natural extension in future work.
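
    A minimal PyTorch sketch of frame-level action classification with ResNet101 follows; the class count, optimizer, and random stand-in batch are assumptions, since the record names the architecture but provides no code.

```python
# Hedged sketch of ResNet101 frame classification; NUM_ACTIONS, the optimizer,
# and the random batch are placeholders. Pretrained ImageNet weights (omitted
# here to keep the sketch offline) would normally be used.
import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIONS = 4                                    # assumed number of actions
model = models.resnet101(weights=None)             # pass weights=... to pretrain
model.fc = nn.Linear(model.fc.in_features, NUM_ACTIONS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

frames = torch.randn(8, 3, 224, 224)               # stand-in fetoscopic frames
labels = torch.randint(0, NUM_ACTIONS, (8,))
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(float(loss))
```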

  18. On-Chip Imaging of Schistosoma haematobium Eggs in Urine for Diagnosis by Computer Vision

    PubMed Central

    Linder, Ewert; Grote, Anne; Varjo, Sami; Linder, Nina; Lebbad, Marianne; Lundin, Mikael; Diwan, Vinod; Hannuksela, Jari; Lundin, Johan

    2013-01-01

    Background Microscopy, being relatively easy to perform at low cost, is the universal diagnostic method for detection of most globally important parasitic infections. As quality control is hard to maintain, misdiagnosis is common, which affects both estimates of parasite burdens and patient care. Novel techniques for high-resolution imaging and image transfer over data networks may offer solutions to these problems through provision of education, quality assurance and diagnostics. Imaging can be done directly on image sensor chips, a technique that can be exploited commercially for the development of inexpensive “mini-microscopes”. Images can be transferred for analysis, both visually and by computer vision, at the point of care as well as at remote locations. Methods/Principal Findings Here we describe imaging of helminth eggs using mini-microscopes constructed from webcams and mobile phone cameras. The results show that an inexpensive webcam, stripped of its optics to allow direct application of the test sample on the exposed surface of the sensor, yields images of Schistosoma haematobium eggs which can be identified visually. Using a highly specific image pattern recognition algorithm, 4 out of 5 eggs observed visually could be identified. Conclusions/Significance As proof of concept we show that an inexpensive imaging device, such as a webcam, may easily be modified into a microscope for the detection of helminth eggs based on on-chip imaging. Furthermore, algorithms for helminth egg detection by machine vision can be generated for automated diagnostics. The results can be exploited for constructing simple imaging devices for low-cost diagnostics of urogenital schistosomiasis and other neglected tropical infectious diseases. PMID:24340107

  19. Computer-Assisted Transgenesis of Caenorhabditis elegans for Deep Phenotyping

    PubMed Central

    Gilleland, Cody L.; Falls, Adam T.; Noraky, James; Heiman, Maxwell G.; Yanik, Mehmet F.

    2015-01-01

    A major goal in the study of human diseases is to assign functions to genes or genetic variants. The model organism Caenorhabditis elegans provides a powerful tool because homologs of many human genes are identifiable, and large collections of genetic vectors and mutant strains are available. However, the delivery of such vector libraries into mutant strains remains a long-standing experimental bottleneck for phenotypic analysis. Here, we present a computer-assisted microinjection platform to streamline the production of transgenic C. elegans with multiple vectors for deep phenotyping. Briefly, animals are immobilized in a temperature-sensitive hydrogel using a standard multiwell platform. Microinjections are then performed under control of an automated microscope using precision robotics driven by customized computer vision algorithms. We demonstrate utility by phenotyping the morphology of 12 neuronal classes in six mutant backgrounds using combinations of neuron-type-specific fluorescent reporters. This technology can industrialize the assignment of in vivo gene function by enabling large-scale transgenic engineering. PMID:26163188

  20. Multi-Stage System for Automatic Target Recognition

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Lu, Thomas T.; Ye, David; Edens, Weston; Johnson, Oliver

    2010-01-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feedforward back-propagation neural network (NN) is then trained to classify each feature vector and to remove false positives. A system parameter optimization process has been developed to adapt to various targets and datasets. The objective was to design an efficient computer vision system that can learn to detect multiple targets in large images with unknown backgrounds. Because the target size is small relative to the image size in this problem, there are many regions of the image that could potentially contain the target. A cursory analysis of every region can be computationally efficient, but may yield too many false positives. On the other hand, a detailed analysis of every region can yield better results, but may be computationally inefficient. The multi-stage ATR system was designed to achieve an optimal balance between accuracy and computational efficiency by incorporating both models. The detection stage first identifies potential ROIs where the target may be present by performing a fast Fourier-domain OT-MACH filter-based correlation. Because the threshold for this stage is chosen with the goal of detecting all true positives, a number of false positives are also detected as ROIs. The verification stage then transforms the regions of interest into feature space and eliminates false positives using an artificial neural network classifier. The multi-stage system allows the detection sensitivity and the identification specificity to be tuned individually in each stage, making it easier to optimize ATR operation for a specific goal. The test results show that the system was successful in substantially reducing the false positive rate when tested on sonar and video image datasets.
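
    The frequency-domain correlation at the heart of the detection stage can be illustrated with a plain matched filter, as below; the full OT-MACH filter is a more elaborate trade-off construction and is not reproduced here, and the image and target are synthetic.

```python
# Spirit-of-the-idea sketch: FFT-based correlation detection with a plain
# matched filter (not the OT-MACH filter itself). Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))
target = rng.random((16, 16))
image[100:116, 80:96] += 3 * target                 # embed a target

img = image - image.mean()
tpl = target - target.mean()
corr = np.fft.ifft2(np.fft.fft2(img) *
                    np.conj(np.fft.fft2(tpl, s=img.shape))).real
rois = np.argwhere(corr > corr.mean() + 5 * corr.std())
print(rois)                                         # peaks near (100, 80)
```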

  1. High-accuracy microassembly by intelligent vision systems and smart sensor integration

    NASA Astrophysics Data System (ADS)

    Schilp, Johannes; Harfensteller, Mark; Jacob, Dirk; Schilp, Michael

    2003-10-01

    Innovative production processes and strategies, from batch production to high-volume scale, play a decisive role in producing microsystems economically. Assembly processes in particular are crucial operations during the production of microsystems. Due to large batch sizes, many microsystems can be produced economically with conventional assembly techniques using specialized and highly automated assembly systems. At the laboratory stage, microsystems are mostly assembled by hand. Between these extremes lies a wide field of small- and medium-sized batch production for which common automated solutions are rarely profitable. For assembly processes at these batch sizes, a flexible automated assembly system has been developed at the iwb. It is based on a modular design: actuators like grippers, dispensers or other process tools can easily be attached thanks to a special tool-changing system, so new joining techniques can easily be implemented. A force sensor and a vision system are integrated into the tool head. The automated assembly processes are based on different optical sensors and smart actuators such as high-accuracy robots and linear motors. A fiber-optic sensor is integrated in the dispensing module to measure, without contact, the clearance between the dispense needle and the substrate. Robot vision modules using optical pattern recognition are also implemented. In combination with relative positioning strategies, an assembly accuracy of less than 3 μm can be realized. A laser system is used for manufacturing processes such as soldering.

  2. Prototyping an automated lumber processing system

    Treesearch

    Powsiri Klinkhachorn; Ravi Kothari; Henry A. Huber; Charles W. McMillin; K. Mukherjee; V. Barnekov

    1993-01-01

    The Automated Lumber Processing System (ALPS) is a multi-disciplinary continuing effort directed toward increasing the yield obtained from hardwood lumber boards during their remanufacture into secondary products (furniture, etc.). ALPS proposes a nondestructive vision system to scan a board for its dimensions and the location and expanse of surface defects on...

  3. Towards fully automated Identification of Vesicle-Membrane Fusion Events in TIRF Microscopy

    NASA Astrophysics Data System (ADS)

    Vallotton, Pascal; James, David E.; Hughes, William E.

    2007-11-01

    Total Internal Reflection Fluorescence Microscopy (TIRFM) is establishing itself as the tool of choice for studying biological activity in close proximity to the plasma membrane. For example, the exquisite selectivity of TIRFM allows monitoring the diffusion of GFP-phogrin vesicles and their recruitment to the plasma membrane in pancreatic β-cells. We present a novel computer vision system for automatically identifying the elusive fusion events of GFP-phogrin vesicles with the plasma membrane. Our method is based on robust object tracking and matched filtering. It should accelerate the quantification of TIRFM data and allow the extraction of more biological information from image data to support research in diabetes and obesity.

  4. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features, such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, wherein each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
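
    Only the multiresolution skeleton of such a system can be conveyed in a few lines; the sketch below uses OpenCV image pyramids on a synthetic frame and says nothing about the neural processors at each level.

```python
# Multiresolution skeleton only: successively halved pyramid levels,
# visited coarse-to-fine, on a synthetic stand-in frame.
import cv2
import numpy as np

img = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
pyramid = [img]
for _ in range(4):
    pyramid.append(cv2.pyrDown(pyramid[-1]))        # Gaussian blur + decimate

for level, im in enumerate(reversed(pyramid)):      # coarse first, then refine
    print("level", level, "shape", im.shape)
```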

  5. Three-dimensional Imaging and Scanning: Current and Future Applications for Pathology

    PubMed Central

    Farahani, Navid; Braun, Alex; Jutt, Dylan; Huffman, Todd; Reder, Nick; Liu, Zheng; Yagi, Yukako; Pantanowitz, Liron

    2017-01-01

    Imaging is vital for the assessment of physiologic and phenotypic details. In the past, biomedical imaging was heavily reliant on analog, low-throughput methods, which would produce two-dimensional images. However, newer, digital, and high-throughput three-dimensional (3D) imaging methods, which rely on computer vision and computer graphics, are transforming the way biomedical professionals practice. 3D imaging has been useful in diagnostic, prognostic, and therapeutic decision-making for the medical and biomedical professions. Herein, we summarize current imaging methods that enable optimal 3D histopathologic reconstruction: Scanning, 3D scanning, and whole slide imaging. Briefly mentioned are emerging platforms, which combine robotics, sectioning, and imaging in their pursuit to digitize and automate the entire microscopy workflow. Finally, both current and emerging 3D imaging methods are discussed in relation to current and future applications within the context of pathology. PMID:28966836

  6. Detection of motile micro-organisms in biological samples by means of a fully automated image processing system

    NASA Astrophysics Data System (ADS)

    Alanis, Elvio; Romero, Graciela; Alvarez, Liliana; Martinez, Carlos C.; Hoyos, Daniel; Basombrio, Miguel A.

    2001-08-01

    A fully automated image processing system for detection of motile microorganisms in biological samples is presented. The system is specifically calibrated for determining the concentration of Trypanosoma cruzi parasites in blood samples of mice infected with Chagas disease, and the method can be adapted for use in other biological samples. A thin layer of blood infected by T. cruzi parasites is examined in a common microscope, in which images of the field of view are taken by a CCD camera and temporarily stored in the computer memory. In a typical field, a few motile parasites are observable, surrounded by red blood cells. The parasites have low contrast and are thus difficult to detect visually, but their great motility betrays their presence through the movement of the neighboring red cells. Several consecutive images of the same field are taken, which are decorrelated with each other where parasites are present, and digitally processed in order to measure the number of parasites present in the field. Several fields are sequentially processed in the same fashion, displacing the sample by means of step motors driven by the computer. A direct advantage of this system is that its results are more reliable and the process is less time consuming than the current subjective evaluations made visually by technicians.
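
    The decorrelation step can be caricatured with simple frame differencing, as in the sketch below; the frames and the threshold are synthetic placeholders, not the calibrated values from the paper.

```python
# Caricature of the decorrelation idea: static structures cancel in
# inter-frame differences while motile parasites leave a changing footprint.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
frames = [rng.integers(0, 30, (120, 160), dtype=np.uint8) for _ in range(5)]
frames[3][60:64, 80:84] += 100                      # a briefly visible mover

stack = np.stack(frames).astype(np.int16)
motion = np.abs(np.diff(stack, axis=0)).max(axis=0) > 50
_, n_moving = ndimage.label(motion)
print("moving objects detected:", n_moving)
```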

  7. [Comparison study between biological vision and computer vision].

    PubMed

    Liu, W; Yuan, X G; Yang, C X; Liu, Z Q; Wang, R

    2001-08-01

    The development of biological vision, in terms of both structure and mechanism, is discussed, with emphasis on the anatomical structure of the biological visual system, a tentative classification of receptive fields, parallel processing of visual information, and feedback and integration effects in the visual cortex. New advances in the field, arising from studies of the morphology of biological vision, are introduced. In addition, biological vision and computer vision are compared, and their similarities and differences are pointed out.

  8. Reinforcement learning in computer vision

    NASA Astrophysics Data System (ADS)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. The solutions of these tasks are used for making decisions about possible future actions, so it is not surprising that, when solving computer vision tasks, we should take into account the special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as the processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning technology and its use for solving computer vision problems.

  9. Stereo Vision Inside Tire

    DTIC Science & Technology

    2015-08-21

    using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the...depth. The software application calibrates the cameras using the plane-based calibration model from the OpenCV calib3d module and allows the...[6] OpenCV. 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed]: 09/01/2015. [7] Qt. 2015. Qt Project home

  10. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and that, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.

  11. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application task (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  12. A Fully Automated Pipeline for Classification Tasks with an Application to Remote Sensing

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Claesen, M.; Takeda, H.; De Moor, B.

    2016-06-01

    Nowadays deep learning has been intensively in the spotlight owing to its great victories at major competitions, which has undeservedly pushed "shallow" machine learning methods, the relatively simple and handy algorithms commonly used by industrial engineers, into the background in spite of their advantages, such as the small amount of time and training data they require. From a practical point of view, we utilized shallow learning algorithms to construct a learning pipeline such that operators can use machine learning without any special knowledge, expensive computation environments, or a large amount of labelled data. The proposed pipeline automates the whole classification process, namely feature selection, feature weighting, and the selection of the most suitable classifier with optimized hyperparameters. The configuration employs particle swarm optimization, a well-known metaheuristic algorithm that is generally fast and precise, which enables us not only to optimize (hyper)parameters but also to determine the appropriate features and classifier for the problem; these choices have conventionally been made a priori from domain knowledge, left untouched, or handled with naive algorithms such as grid search. Through experiments with the MNIST and CIFAR-10 datasets, common computer vision datasets for character recognition and object recognition respectively, our automated learning approach provides high performance considering its simple, non-specialized setting, small amount of training data, and practical learning time. Moreover, compared to deep learning, the performance stays robust with almost no modification even on a remote sensing object recognition problem, which in turn indicates a high possibility that our approach contributes to general classification problems.
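
    A toy version of the particle swarm search over classifier hyperparameters might look as follows; this is a sketch only (the real pipeline also selects features and the classifier itself), and the swarm constants, bounds, and dataset are conventional assumptions rather than the paper's settings.

```python
# Toy PSO over SVM hyperparameters (log10 C, log10 gamma), scored by
# 3-fold cross-validation accuracy on a small bundled dataset.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(p):                                   # p = (log10 C, log10 gamma)
    return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()

n, dims = 8, 2
pos = rng.uniform([-1.0, -5.0], [3.0, -1.0], (n, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(10):
    r1, r2 = rng.random((2, n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()]

print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pbest_val.max())
```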

  13. Computer vision for foreign body detection and removal in the food industry

    USDA-ARS?s Scientific Manuscript database

    Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...

  14. Chapter 11. Quality evaluation of apple by computer vision

    USDA-ARS?s Scientific Manuscript database

    Apple is one of the most consumed fruits in the world, and there is a critical need for enhanced computer vision technology for quality assessment of apples. This chapter gives a comprehensive review on recent advances in various computer vision techniques for detecting surface and internal defects ...

  15. Deep Learning for Computer Vision: A Brief Review

    PubMed Central

    Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios

    2018-01-01

    Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619

  16. An automated method for accurate vessel segmentation.

    PubMed

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting Tim

    2017-05-07

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance for diabetic retinopathy, quantification of cerebral aneurysm's growth, and guiding surgery in neurosurgical procedures. Despite technological advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise ratio (SNR), and (2) vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008 European Conf. on Computer Vision; Law and Chung 2009 IEEE Trans. Image Process. 18 596-612; Wang 2015 J. Neurosci. Methods 241 30-6) with manually optimized parameters. Our system has also been applied clinically for cerebral aneurysm development analysis. Experimental results on 10 patients' data, with two 3D CT scans per patient, show that our system's automatic diagnosis outcomes are consistent with clinicians' manual measurements.
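
    The flavor of contributions (1) and (2) can be sketched with stock OpenCV primitives; the snippet below is not the authors' system, only a small illustration pairing adaptive contrast enhancement (CLAHE) with Canny edges on a synthetic stand-in image, with invented parameters.

```python
# Not the authors' system: CLAHE contrast enhancement plus Canny edges to
# sharpen boundaries before thresholding, on a synthetic stand-in image.
import cv2
import numpy as np

img = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)
edges = cv2.Canny(enhanced, 50, 150)
vessels = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                cv2.THRESH_BINARY_INV, 15, 2)
mask = cv2.bitwise_or(vessels, edges)
print("candidate vessel pixels:", int(np.count_nonzero(mask)))
```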

  17. An automated method for accurate vessel segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting (Tim)

    2017-05-01

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance for diabetic retinopathy, quantification of cerebral aneurysm’s growth, and guiding surgery in neurosurgical procedures. Despite technological advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise ratio (SNR), and (2) vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008 European Conf. on Computer Vision; Law and Chung 2009 IEEE Trans. Image Process. 18 596-612; Wang 2015 J. Neurosci. Methods 241 30-6) with manually optimized parameters. Our system has also been applied clinically for cerebral aneurysm development analysis. Experimental results on 10 patients’ data, with two 3D CT scans per patient, show that our system’s automatic diagnosis outcomes are consistent with clinicians’ manual measurements.

  18. Interdisciplinary development of manual and automated product usability assessments for older adults with dementia: lessons learned.

    PubMed

    Boger, Jennifer; Taati, Babak; Mihailidis, Alex

    2016-10-01

    The changes in cognitive abilities that accompany dementia can make it difficult to use the everyday products required to complete activities of daily living. Products that are inherently more usable for people with dementia could facilitate independent activity completion, thus reducing the need for caregiver assistance. The objectives of this research were to: (1) gain an understanding of how water tap design impacted tap usability and (2) create an automated computerized tool that could assess tap usability. 27 older adults, ranging from cognitively intact to advanced dementia, completed 1309 trials on five tap designs. Data were manually analyzed to investigate tap usability and were also used to develop an automated usability analysis tool. Researchers collaborated to modify existing techniques and to create novel ones to accomplish both goals. This paper presents lessons learned through the course of this research, which could be applicable to the development of other usability studies, automated vision-based assessments, and assistive technologies for cognitively impaired older adults. Collaborative interdisciplinary teamwork, which included older adults with dementia as participants, was key to enabling the innovative advances that achieved the project's research goals. Implications for Rehabilitation Products that are implicitly familiar and usable by older adults could foster independent activity completion, potentially reducing reliance on a caregiver. The computer-based automated tool can significantly reduce the time and effort required to perform product usability analysis, making this type of analysis more feasible. Interdisciplinary collaboration can result in a more holistic understanding of assistive technology research challenges and enable innovative solutions.

  19. A low-cost machine vision system for the recognition and sorting of small parts

    NASA Astrophysics Data System (ADS)

    Barea, Gustavo; Surgenor, Brian W.; Chauhan, Vedang; Joshi, Keyur D.

    2018-04-01

    An automated machine vision-based system for the recognition and sorting of small parts was designed, assembled and tested. The system was developed to address a need to expose engineering students to the issues of machine vision and assembly automation technology, with readily available and relatively low-cost hardware and software. This paper outlines the design of the system and presents experimental performance results. Three different styles of plastic gears, together with three different styles of defective gears, were used to test the system. A pattern matching tool was used for part classification. Nine experiments were conducted to demonstrate the effects of changing various hardware and software parameters, including conveyor speed, gear feed rate, and classification and identification score thresholds. It was found that the system could achieve a maximum system accuracy of 95% at a feed rate of 60 parts/min, for a given set of parameter settings. Future work will examine the effect of lighting.
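
    Score-thresholded pattern matching of this kind can be sketched with OpenCV template matching, as below; the templates, the embedded "part", and the 0.8 acceptance threshold are synthetic assumptions, not the system's actual tool or settings.

```python
# Sketch of pattern-matching classification with a score threshold;
# templates and the test part are synthetic stand-ins.
import cv2
import numpy as np

rng = np.random.default_rng(0)
templates = {name: rng.random((32, 32)).astype(np.float32)
             for name in ("gear_a", "gear_b", "gear_c")}
part = np.zeros((64, 64), np.float32)
part[10:42, 10:42] = templates["gear_b"]            # the part is a gear_b

scores = {name: float(cv2.matchTemplate(part, tpl, cv2.TM_CCOEFF_NORMED).max())
          for name, tpl in templates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best if scores[best] >= 0.8 else "defective/unknown")
```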

  20. An Automated Classification Technique for Detecting Defects in Battery Cells

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2006-01-01

    Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine if the battery cell is acceptable for a particular use or device. Human visual inspection is a time consuming task when compared to an inspection process conducted by a machine vision system. Human inspection is also subject to human error and fatigue over time. We present a machine vision technique that can be used to automatically identify defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or cathode cell view as well as classified as an acceptable or a defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is the implementation of an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.
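
    One way to picture the FFT-based feature idea is the sketch below: radially binned spectrum magnitudes as a texture signature that a feature-based classifier could consume. This is illustrative only, not the NASA classifier, and the patch and bin count are assumptions.

```python
# Illustrative 2-D FFT feature extraction: radially binned spectrum
# magnitudes as a texture signature for an image patch.
import numpy as np

def fft_features(patch, n_bins=8):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
    h, w = spec.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    return np.array([spec[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

patch = np.random.default_rng(0).random((64, 64))
print(fft_features(patch).round(2))
```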

  1. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.

    PubMed

    Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph

    2013-08-01

    Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.

  2. Electronic Data Interchange in Procurement

    DTIC Science & Technology

    1990-04-01

    contract management and order processing systems. This conversion of automated information to paper and back to automated form is not only slow and...automated purchasing computer and the contractor’s order processing computer through telephone lines, as illustrated in Figure 1-1. Computer-to-computer...into the contractor’s order processing or contract management system. This approach - converting automated information to paper and back to automated

  3. Machine Learning, deep learning and optimization in computer vision

    NASA Astrophysics Data System (ADS)

    Canu, Stéphane

    2017-03-01

    As noted in the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning, covering basic motivations, ideas, models and optimization in deep learning for computer vision, and identifying challenges and opportunities. It will focus on issues related to large-scale learning, that is: high-dimensional features, a large variety of visual classes, and a large number of examples.

  4. Using Vision and Speech Features for Automated Prediction of Performance Metrics in Multimodal Dialogs. Research Report. ETS RR-17-20

    ERIC Educational Resources Information Center

    Ramanarayanan, Vikram; Lange, Patrick; Evanini, Keelan; Molloy, Hillary; Tsuprun, Eugene; Qian, Yao; Suendermann-Oeft, David

    2017-01-01

    Predicting and analyzing multimodal dialog user experience (UX) metrics, such as overall call experience, caller engagement, and latency, among other metrics, in an ongoing manner is important for evaluating such systems. We investigate automated prediction of multiple such metrics collected from crowdsourced interactions with an open-source,…

  5. Evaluation of a multi-sensor machine vision system for automated hardwood lumber grading

    Treesearch

    D. Earl Kline; Chris Surak; Philip A. Araman

    2000-01-01

    Over the last 10 years, scientists at the Thomas M. Brooks Forest Products Center, the Bradley Department of Electrical Engineering, and the USDA Forest Service have been working on lumber scanning systems that can accurately locate and identify defects in hardwood lumber. Current R&D efforts are targeted toward developing automated lumber grading technologies. The...

  6. Vision Function in HIV-infected Individuals without Retinitis; Report of the Studies of Ocular Complications of AIDS Research Group

    PubMed Central

    Freeman, William R.; Van Natta, Mark L.; Jabs, Douglas; Sample, Pamela A.; Sadun, Alfredo A.; Thorne, Jennifer; Shah, Kayur H.; Holland, Gary N.

    2008-01-01

    Purpose To evaluate the prevalence and risk factors for vision loss in patients with clinical or immunologic AIDS without infectious retinitis. Design A prospective multicentered cohort study of patients with AIDS. Methods 1,351 patients (2,671 eyes) at 19 clinical trials centers diagnosed with AIDS but without major ocular complications of HIV. Standardized measurements of visual acuity, automated perimetry, and contrast sensitivity were analyzed and correlated with measurements of patients’ health and medical data relating to HIV infection. We evaluated correlations between vision function testing and HIV-related risk factors and medical testing. Results There were significant (p<0.05) associations between measures of decreasing vision function and indices of increasing disease severity including Karnofsky score and hemoglobin. A significant relationship was seen between low contrast sensitivity and decreasing levels of CD4+ T-cell count. Three percent of eyes had a visual acuity worse than 20/40 Snellen equivalents, which was significantly associated with a history of opportunistic infections and low Karnofsky score. When compared to external groups with normal vision, 39% of eyes had abnormal mean deviation on automated perimetry, 33% had abnormal pattern standard deviation, and 12% of eyes had low contrast sensitivity. Conclusions This study confirms that visual dysfunction is common in patients with AIDS but without retinitis. The most prevalent visual dysfunction is loss of visual field; nearly 40% of patients have some abnormal visual field. Vision loss is associated both with greater overall disease severity and with reduced access to care. The pathophysiology of this vision loss is unknown but is consistent with retinovascular disease or optic nerve disease. PMID:18191094

  7. Study on the Feasibility of RGB Substitute CIR for Automatic Removal Vegetation Occlusion Based on Ground Close-Range Building Images

    NASA Astrophysics Data System (ADS)

    Li, C.; Li, F.; Liu, Y.; Li, X.; Liu, P.; Xiao, B.

    2012-07-01

    Building 3D reconstruction based on ground remote sensing data (image, video and lidar) inevitably faces the problem that buildings are often occluded by vegetation, so automatically removing and repairing vegetation occlusion is an important preprocessing step for image understanding, computer vision and digital photogrammetry. In traditional multispectral remote sensing from aeronautic and space platforms, the Red and Near-infrared (NIR) bands, for example through the NDVI (Normalized Difference Vegetation Index), are useful for distinguishing vegetation and clouds, amongst other targets. However, especially on ground platforms, CIR (Color Infrared) is little used in computer vision and digital photogrammetry, which usually take only true-color RGB into account. Whether CIR is necessary for vegetation segmentation is therefore a significant question, because most close-range cameras lack an NIR band. Moreover, the CIE L*a*b* color space, obtained by transformation from RGB, has attracted little interest from photogrammetrists despite its power in image classification and analysis. Hence, CIE (L, a, b) features with a support vector machine (SVM) are suggested for vegetation segmentation as a substitute for CIR. Finally, experimental results on visual effect and degree of automation are given. The conclusion is that it is feasible to remove and segment vegetation occlusion without an NIR band. This work should pave the way for texture reconstruction and repair in future 3D reconstruction.
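
    The suggested RGB-only approach can be sketched as pixel classification in CIE L*a*b* with an SVM, as below; the training colors are synthetic placeholders (a real run would use hand-labelled vegetation and building pixels), and the tiny test image is a stand-in.

```python
# Sketch: pixels mapped to CIE L*a*b* and classified by an SVM, RGB only
# (no NIR). Training data are synthetic greenish/greyish color samples.
import numpy as np
from skimage.color import rgb2lab
from sklearn.svm import SVC

rng = np.random.default_rng(0)
veg = rng.normal([0.20, 0.50, 0.20], 0.05, (200, 3)).clip(0, 1)   # greenish
bld = rng.normal([0.60, 0.55, 0.50], 0.05, (200, 3)).clip(0, 1)   # greyish
X = rgb2lab(np.vstack([veg, bld]).reshape(-1, 1, 3)).reshape(-1, 3)
y = np.r_[np.ones(200), np.zeros(200)]

clf = SVC(kernel="rbf").fit(X, y)

image = rng.random((4, 4, 3))                       # stand-in RGB image
labels = clf.predict(rgb2lab(image).reshape(-1, 3)).reshape(4, 4)
print(labels.astype(bool))                          # True = vegetation pixel
```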

  8. Techniques and potential capabilities of multi-resolutional information (knowledge) processing

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

    A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Definite conditions are analyzed whose satisfaction is required for the organization and processing of redundant information (knowledge) in multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.

  9. n-SIFT: n-dimensional scale invariant feature transform.

    PubMed

    Cheung, Warren; Hamarneh, Ghassan

    2009-09-01

    We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.
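
    One ingredient of the method can be sketched for n=3, as below: gradient orientations in spherical coordinates accumulated into a magnitude-weighted multidimensional histogram. This is not the full descriptor (no keypoint detection, scale selection, or orientation normalization), and the volume and bin counts are assumptions.

```python
# Sketch of one n-SIFT ingredient for n=3: gradients in spherical
# coordinates binned into a magnitude-weighted multidimensional histogram.
import numpy as np

vol = np.random.default_rng(0).random((32, 32, 32))   # stand-in 3-D image
g0, g1, g2 = np.gradient(vol)
mag = np.sqrt(g0**2 + g1**2 + g2**2)
theta = np.arccos(np.clip(g0 / np.maximum(mag, 1e-12), -1, 1))  # polar angle
phi = np.arctan2(g1, g2)                                        # azimuth

hist, _ = np.histogramdd(np.stack([theta.ravel(), phi.ravel()], axis=1),
                         bins=(4, 8), weights=mag.ravel())
print(hist.shape, round(float(hist.sum()), 2))
```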

  10. 3-D Signal Processing in a Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to 3-dimension. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  11. An overview of computer vision

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  12. Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    ERIC Educational Resources Information Center

    Cazorla, Miguel; Viejo, Diego

    2015-01-01

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library oriented to teaching computer vision. We have designed and built the library from scratch with emphasis on readability and…

  13. Detecting Motion from a Moving Platform; Phase 3: Unification of Control and Sensing for More Advanced Situational Awareness

    DTIC Science & Technology

    2011-11-01

    (RX-TY-TR-2011-0096-01) develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica...01 summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica

  14. Assessment of Grating Acuity in Infants and Toddlers Using an Electronic Acuity Card: The Dobson Card.

    PubMed

    Mohan, Kathleen M; Miller, Joseph M; Harvey, Erin M; Gerhart, Kimberly D; Apple, Howard P; Apple, Deborah; Smith, Jordana M; Davis, Amy L; Leonard-Green, Tina; Campus, Irene; Dennis, Leslie K

    2016-01-01

    To determine if testing binocular visual acuity in infants and toddlers using the Acuity Card Procedure (ACP) with electronic grating stimuli yields clinically useful data. Participants were infants and toddlers ages 5 to 36.7 months referred by pediatricians due to failed automated vision screening. The ACP was used to test binocular grating acuity. Stimuli were presented on the Dobson Card. The Dobson Card consists of a handheld matte-black plexiglass frame with two flush-mounted tablet computers and is similar in size and form to commercially available printed grating acuity testing stimuli (Teller Acuity Cards II [TACII]; Stereo Optical, Inc., Chicago, IL). On each trial, one tablet displayed a square-wave grating and the other displayed a luminance-matched uniform gray patch. Stimuli were roughly equivalent to the stimuli available in the printed TACII stimuli. After acuity testing, each child received a cycloplegic eye examination. Based on cycloplegic retinoscopy, patients were categorized as having high or low refractive error per American Association for Pediatric Ophthalmology and Strabismus vision screening referral criteria. Mean acuities for high and low refractive error groups were compared using analysis of covariance, controlling for age. Mean visual acuity was significantly poorer in children with high refractive error than in those with low refractive error (P = .015). Electronic stimuli presented using the ACP can yield clinically useful measurements of grating acuity in infants and toddlers. Further research is needed to determine the optimal conditions and procedures for obtaining accurate and clinically useful automated measurements of visual acuity in infants and toddlers. Copyright 2016, SLACK Incorporated.

  15. Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.

    PubMed

    Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish

    2015-01-01

    Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.

  16. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
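
    The abstract does not name the detector or descriptor used to match the "pseudo-images"; as one plausible instantiation of that step, the sketch below pairs ORB keypoints and rejects geometrically inconsistent matches with RANSAC. The feature choice and inlier threshold are assumptions.

```python
import cv2
import numpy as np

def match_pseudo_images(img_a, img_b, min_inliers=15):
    """Match keypoints between two sonar pseudo-images and keep only
    RANSAC-consistent pairs. Returns inlier point arrays, or None."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_inliers:
        return None
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # RANSAC on a homography model discards outlier correspondences.
    H, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
    if H is None or mask.ravel().sum() < min_inliers:
        return None
    inliers = mask.ravel().astype(bool)
    return pts_a[inliers], pts_b[inliers]
```

    Surviving correspondences would then seed the ICP alignment described above.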

  17. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-07-28

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
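
    As a rough illustration of step (2), the projection-histogram count, the sketch below sums a binary plant mask along image columns and counts peaks along the row direction; the smoothing window and peak-spacing parameters are assumptions rather than the paper's calibrated values.

```python
import numpy as np
from scipy.signal import find_peaks

def count_plants(binary_mask, min_spacing_px=30, min_height=20):
    """Count plants in one rectified row image from its projection histogram.

    binary_mask: 2D boolean array, True on plant pixels, with the tree row
    running left to right."""
    profile = binary_mask.sum(axis=0)                            # column-wise projection
    profile = np.convolve(profile, np.ones(5) / 5, mode="same")  # light smoothing
    peaks, _ = find_peaks(profile, distance=min_spacing_px, height=min_height)
    return len(peaks), peaks
```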

  18. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Hanson, Bradley D.; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-01-01

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images. PMID:26225982

  19. Image-Based 3D Face Modeling System

    NASA Astrophysics Data System (ADS)

    Park, In Kyu; Zhang, Hui; Vezhnevets, Vladimir

    2005-12-01

    This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a set of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation, including all the optional manual corrections, takes only 2-3 minutes.

  20. Deep Space Network information system architecture study

    NASA Technical Reports Server (NTRS)

    Beswick, C. A.; Markley, R. W. (Editor); Atkinson, D. J.; Cooper, L. P.; Tausworthe, R. C.; Masline, R. C.; Jenkins, J. S.; Crowe, R. A.; Thomas, J. L.; Stoloff, M. J.

    1992-01-01

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

  1. Automated grading system for evaluation of ocular redness associated with dry eye.

    PubMed

    Rodriguez, John D; Johnston, Patrick R; Ousler, George W; Smith, Lisa M; Abelson, Mark B

    2013-01-01

    We have observed that dry eye redness is characterized by a prominence of fine horizontal conjunctival vessels in the exposed ocular surface of the interpalpebral fissure, and have incorporated this feature into the grading of redness in clinical studies of dry eye. To develop an automated method of grading dry eye-associated ocular redness in order to expand on the clinical grading system currently used. Ninety-nine images from 26 dry eye subjects were evaluated by five graders using the 0-4 (in 0.5 increments) Ora Calibra™ Dry Eye Redness (OCDER) scale. For the automated method, the OpenCV computer vision library was used to develop software for calculating redness and the prominence of horizontal conjunctival vessels (noted as "horizontality"). From the original photograph, the region of interest (ROI) was selected manually using the open source ImageJ software. Total average redness intensity (COM-Red) was calculated as a single-channel 8-bit image as R - 0.83G - 0.17B, where R, G and B were the respective intensities of the red, green and blue channels. The location of vessels was detected by normalizing the blue channel and selecting pixels with an intensity of less than 97% of the mean. The horizontal component (COM-Hor) was calculated by the first-order Sobel derivative in the vertical direction, and the score was calculated as the average blue channel image intensity of this vertical derivative. Pearson correlation coefficients, accuracy and concordance correlation coefficients (CCC) were calculated after regression and standardized regression of the dataset. The agreement (both Pearson's and CCC) among investigators using the OCDER scale was 0.67, while the agreement of investigator to computer was 0.76. A multiple regression using both redness and horizontality improved the agreement CCC from 0.66 and 0.69 to 0.76, demonstrating the contribution of vessel geometry to the overall grade. Computer analysis of a given image has 100% repeatability and zero variability from session to session. This objective means of grading ocular redness in a unified fashion has potential significance as a new clinical endpoint. In comparisons between computer and investigator, computer grading proved to be more reliable than a second human grader using the OCDER scale. The best-fitting model based on the present sample, and usable for future studies, was [Formula: see text], where [Formula: see text] is the predicted investigator grade, and [Formula: see text] and [Formula: see text] are logarithmic transformations of the computer-calculated parameters COM-Hor and COM-Red. Considering its superior repeatability, computer-automated grading might be preferable to investigator grading in multicentered dry eye studies, in which the subtle differences in redness incurred by treatment have historically been difficult to define.
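
    The abstract states the redness formula and the outline of the horizontality measure explicitly, so both can be recomputed on a manually selected ROI as below. Where the abstract is silent (Sobel kernel size, use of the absolute value, the exact averaging region), the choices here are assumptions.

```python
import cv2
import numpy as np

def redness_scores(roi_bgr):
    """Recompute COM-Red and COM-Hor on a conjunctival ROI (BGR image)."""
    b, g, r = [roi_bgr[:, :, i].astype(np.float32) for i in range(3)]
    # COM-Red: single-channel redness image R - 0.83G - 0.17B, averaged.
    com_red = float(np.mean(r - 0.83 * g - 0.17 * b))
    # Vessel localization: normalized blue channel below 97% of its mean.
    b_norm = b / (b.max() + 1e-6)
    vessel_mask = b_norm < 0.97 * b_norm.mean()
    # COM-Hor: first-order Sobel derivative in the vertical direction of the
    # blue channel, scored as the mean magnitude of that derivative image.
    dy = cv2.Sobel(b, cv2.CV_32F, 0, 1, ksize=3)
    com_hor = float(np.mean(np.abs(dy)))
    return com_red, com_hor, vessel_mask
```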

  2. Vision-Based UAV Flight Control and Obstacle Avoidance

    DTIC Science & Technology

    2006-01-01

    Excerpts from the indexed report: a velocity vector is denoted Vb = (Vb1, Vb2, Vb3), and Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. Motion and structure analysis often involves computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Block matching proceeds in two steps: first, a set of features is extracted from each block; second, the distance between the two sets of features is computed.

  3. Automated decoding of facial expressions reveals marked differences in children when telling antisocial versus prosocial lies.

    PubMed

    Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang

    2016-10-01

    The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11 years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm where they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it. All children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS), to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were differentiated accurately, reliably, and at significantly above-chance accuracy. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt. Children expressed joy more in prosocial lying than in antisocial lying. Girls showed more joy and less contempt compared with boys when they told prosocial lies. Boys showed more contempt when they told prosocial lies than when they told antisocial lies. The key action units (AUs) that differentiate children's antisocial and prosocial lies are blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ while telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences in facial expression suggests the viability of using computer vision technology in deception research. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D teat positions are computed. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system; the best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  5. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles

    PubMed Central

    Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger

    2016-01-01

    Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption. PMID:26978365

  6. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles.

    PubMed

    Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger

    2016-03-11

    Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption.
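
    The paper describes the guide-line detector only as "a computer vision approach"; one conventional way to realize it, sketched below, is a Canny edge map followed by a probabilistic Hough transform, keeping the longest segment as the line to follow. All thresholds are illustrative.

```python
import cv2
import numpy as np

def detect_guide_line(frame_bgr):
    """Locate a painted guide line and return its image-space angle and
    horizontal midpoint, or None if no segment is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                               minLineLength=80, maxLineGap=20)
    if segments is None:
        return None
    # Keep the longest detected segment as the most likely guide line.
    x1, y1, x2, y2 = max(segments[:, 0, :],
                         key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
    angle = np.arctan2(y2 - y1, x2 - x1)
    return angle, (x1 + x2) / 2.0
```

    A steering controller would then regulate the detected angle and lateral offset toward their set-points.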

  7. Employment Opportunities for the Handicapped in Programmable Automation.

    ERIC Educational Resources Information Center

    Swift, Richard; Leneway, Robert

    A Computer Integrated Manufacturing System may make it possible for severely disabled people to custom design, machine, and manufacture either wood or metal parts. Programmable automation merges computer aided design, computer aided manufacturing, computer aided engineering, and computer integrated manufacturing systems with automated production…

  8. Heterogeneous compute in computer vision: OpenCL in OpenCV

    NASA Astrophysics Data System (ADS)

    Gasparakis, Harris

    2014-02-01

    We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision, and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.
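
    The Transparent API mentioned above is compact enough to show directly: wrapping an image in cv2.UMat lets the same OpenCV calls run on the CPU or, where available, on an OpenCL device. The file name below is a placeholder.

```python
import cv2

img = cv2.imread("input.png")           # placeholder path
u_img = cv2.UMat(img)                   # data may now live in device memory
u_gray = cv2.cvtColor(u_img, cv2.COLOR_BGR2GRAY)
u_edges = cv2.Canny(u_gray, 50, 150)    # dispatched to OpenCL if enabled
edges = u_edges.get()                   # download the result as a numpy array

print("OpenCL available:", cv2.ocl.haveOpenCL())
```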

  9. Model-Independent Phenotyping of C. elegans Locomotion Using Scale-Invariant Feature Transform

    PubMed Central

    Koren, Yelena; Sznitman, Raphael; Arratia, Paulo E.; Carls, Christopher; Krajacic, Predrag; Brown, André E. X.; Sznitman, Josué

    2015-01-01

    To uncover the genetic basis of behavioral traits in the model organism C. elegans, a common strategy is to study locomotion defects in mutants. Despite efforts to introduce (semi-)automated phenotyping strategies, current methods overwhelmingly depend on worm-specific features that must be hand-crafted and as such are not generalizable for phenotyping motility in other animal models. Hence, there is an ongoing need for robust algorithms that can automatically analyze and classify motility phenotypes quantitatively. To this end, we have developed a fully-automated approach to characterize C. elegans’ phenotypes that does not require the definition of nematode-specific features. Rather, we make use of the popular computer vision Scale-Invariant Feature Transform (SIFT), from which we construct histograms of commonly-observed SIFT features to represent nematode motility. We first evaluated our method on a synthetic dataset simulating a range of nematode crawling gaits. Next, we evaluated our algorithm on two distinct datasets of crawling C. elegans with mutants affecting neuromuscular structure and function. Not only is our algorithm able to detect differences between strains, but the results also capture similarities in locomotory phenotypes, leading to clustering that is consistent with expectations based on genetic relationships. Our proposed approach generalizes directly and should be applicable to other animal models. Such applicability holds promise for computational ethology as more groups collect high-resolution image data of animal behavior. PMID:25816290
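
    A minimal bag-of-features sketch in the spirit of the method: SIFT descriptors are pooled across frames, clustered into a small vocabulary, and each frame is represented by its normalized word histogram. The vocabulary size and the use of k-means are assumptions; the paper's exact pipeline may differ.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_histograms(frames, n_words=50, seed=0):
    """Represent each grayscale frame by a histogram over a vocabulary of
    commonly observed SIFT descriptors."""
    sift = cv2.SIFT_create()
    per_frame = []
    for f in frames:
        _, des = sift.detectAndCompute(f, None)
        per_frame.append(des if des is not None
                         else np.empty((0, 128), np.float32))
    vocab = KMeans(n_clusters=n_words, random_state=seed, n_init=10)
    vocab.fit(np.vstack(per_frame))
    hists = []
    for des in per_frame:
        h = np.zeros(n_words)
        if len(des):
            words, counts = np.unique(vocab.predict(des), return_counts=True)
            h[words] = counts
        hists.append(h / max(h.sum(), 1.0))   # normalized word histogram
    return np.array(hists)
```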

  10. Automated tracking of animal posture and movement during exploration and sensory orientation behaviors.

    PubMed

    Gomez-Marin, Alex; Partoune, Nicolas; Stephens, Greg J; Louis, Matthieu; Brembs, Björn

    2012-01-01

    The nervous functions of an organism are primarily reflected in the behavior it is capable of. Measuring behavior quantitatively, at high resolution and in an automated fashion, provides valuable information about the underlying neural circuit computation. Accordingly, computer-vision applications for animal tracking are becoming a key complementary toolkit to genetic, molecular and electrophysiological characterization in systems neuroscience. We present Sensory Orientation Software (SOS) to measure behavior and infer sensory experience correlates. SOS is a simple and versatile system to track body posture and motion of single animals in two-dimensional environments. In the presence of a sensory landscape, tracking the trajectory of the animal's sensors and its postural evolution provides a quantitative framework to study sensorimotor integration. To illustrate the utility of SOS, we examine the orientation behavior of fruit fly larvae in response to odor, temperature and light gradients. We show that SOS is suitable to carry out high-resolution behavioral tracking for a wide range of organisms including flatworms, fishes and mice. Our work contributes to the growing repertoire of behavioral analysis tools for collecting rich and fine-grained data to draw and test hypotheses about the functioning of the nervous system. By providing open access to our code and documenting the software design, we aim to encourage the adaptation of SOS by a wide community of non-specialists to their particular model organism and questions of interest.

  11. Quality grading of Atlantic salmon (Salmo salar) by computer vision.

    PubMed

    Misimi, E; Erikson, U; Skavhaug, A

    2008-06-01

    In this study, we present a promising method of computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among different quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before the image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and feature extraction was then performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier designed using linear discriminant analysis. The performance of the classifier was tested using the leave-one-out cross-validation method, and the classification results showed good agreement between the classification done by human inspectors and by computer vision. The computer vision-based method correctly classified 90% of the salmon from the data set, as compared with the classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementing today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants, as it can replace the manual labor on which grading tasks still rely.
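
    The classifier itself, a threshold rule designed with linear discriminant analysis and validated leave-one-out, is straightforward to sketch; the geometric features mentioned in the docstring are placeholders, since the abstract does not enumerate them.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loo_accuracy(features, labels):
    """Leave-one-out accuracy of an LDA grader trained on geometric
    features (e.g. silhouette area, length/width ratio) of segmented fish."""
    clf = LinearDiscriminantAnalysis()
    scores = cross_val_score(clf, features, labels, cv=LeaveOneOut())
    return scores.mean()

# e.g. loo_accuracy(np.asarray(geometric_features), np.asarray(grades))
```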

  12. Simplifying applications software for vision guided robot implementation

    NASA Technical Reports Server (NTRS)

    Duncheon, Charlie

    1994-01-01

    A simple approach to robot applications software is described. The idea is to use commercially available software and hardware wherever possible to minimize system costs, schedules and risks. The U.S. has been slow in the adoption of robots and flexible automation compared to the flourishing growth of robot implementation in Japan. The U.S. can benefit from this approach because of a more flexible array of vision guided robot technologies.

  13. Automatic Inspection In Car Industry : User Point-Of-View

    NASA Astrophysics Data System (ADS)

    Salesse, Robert

    1986-11-01

    Vision-based equipment for automatic inspection has been incorporated into the production lines of nearly all car manufacturers. RENAULT now has three years of experience with automated vision, and some rules have been established. Our most important contributions have been: - Examples of applications, some now operating, some awaiting integration into complete systems. - How to establish a good "request to quote". - How to examine and compare suppliers' offers: what selection criteria to apply and what important questions to ask. - What can be expected from the new vision equipment, and what the needs are in hardware and software.

  14. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  15. Automated driving and autonomous functions on road vehicles

    NASA Astrophysics Data System (ADS)

    Gordon, T. J.; Lidberg, M.

    2015-07-01

    In recent years, road vehicle automation has become an important and popular topic for research and development in both academic and industrial spheres. New developments have received extensive coverage in the popular press, and it may be said that the topic has captured the public imagination. Indeed, the topic has generated interest across a wide range of academic, industry and governmental communities, well beyond vehicle engineering; these include computer science, transportation, urban planning, legal, social science and psychology. While this follows a similar surge of interest - and subsequent hiatus - in Automated Highway Systems in the 1990s, the current level of interest is substantially greater, and current expectations are high. It is common to frame the new technologies under the banner of 'self-driving cars' - robotic systems potentially taking over the entire role of the human driver, a capability that does not fully exist at present. However, this single vision leads one to ignore the existing range of automated systems that are both feasible and useful. Recent developments are underpinned by substantial and long-term trends in the 'computerisation' of the automobile, with developments in sensors, actuators and control technologies spurring new advances in both industry and academia. In this paper, we review the evolution of the intelligent vehicle and the supporting technologies with a focus on the progress and key challenges for vehicle system dynamics. A number of relevant themes around driving automation are explored in this article, with special focus on those most relevant to the underlying vehicle system dynamics. One conclusion is that increased precision is needed in sensing and controlling vehicle motions, a trend that can mimic that of the aerospace industry, and similarly benefit from increased use of redundant by-wire actuators.

  16. Automated CD-SEM recipe creation technology for mass production using CAD data

    NASA Astrophysics Data System (ADS)

    Kawahara, Toshikazu; Yoshida, Masamichi; Tanaka, Masashi; Ido, Sanyu; Nakano, Hiroyuki; Adachi, Naokaka; Abe, Yuichi; Nagatomo, Wataru

    2011-03-01

    Critical Dimension Scanning Electron Microscope (CD-SEM) recipe creation normally requires preparing a sample for pattern-matching registration and then creating the recipe on the CD-SEM using that sample, which hinders the reduction of test-production cost and time in semiconductor manufacturing factories. From the perspective of cost reduction and improvement of test-production efficiency, automated CD-SEM recipe creation without sample preparation and manual operation has become important in production lines. For automated CD-SEM recipe creation, we have introduced RecipeDirector (RD), which enables recipe creation using Computer-Aided Design (CAD) data and text data that includes the measurement information. We have developed a system that automatically creates the CAD data and the text data necessary for recipe creation on RD; and, to eliminate manual operation, we have enhanced RD so that all measurement information can be specified in the text data. As a result, we have established an automated CD-SEM recipe creation system that requires neither sample preparation nor manual operation. For the introduction of the CD-SEM recipe creation system using RD to the production lines, the accuracy of the pattern matching was an issue: the shape of the design templates for matching, created from the CAD data, differed visually from that of the SEM images. Thus, a robust pattern-matching algorithm that accounts for this shape difference was needed. The addition of image processing of the matching templates and shape processing of the CAD patterns in the lower layer has enabled robust pattern matching. This paper describes the automated CD-SEM recipe creation technology for production lines, requiring neither sample preparation nor manual operation, using RD as applied in the Sony Semiconductor Kyusyu Corporation Kumamoto Technology Center (SCK Corporation Kumamoto TEC).

  17. Progress in computer vision.

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  18. Can Humans Fly? Action Understanding with Multiple Classes of Actors

    DTIC Science & Technology

    2015-06-08

    The indexed excerpt of this report contains only bibliography fragments, including citations to work on recognition using structure-from-motion point clouds (European Conference on Computer Vision, 2008), R. Caruana's "Multitask learning" (Machine Learning), the KITTI vision benchmark suite for autonomous driving (IEEE Conference on Computer Vision and Pattern Recognition, 2012), and work by L. Gorelick and M. Blank.

  19. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging.

    PubMed

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, S F; Horska, A; Brancati, F L; Diehl, A M; Smith, S; Clark, J M

    2008-01-01

    To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Feature evaluation and test-retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. A random sample of 15 obese adults with type 2 diabetes. Axial T1-weighted spin echo images centered at vertebral bodies of L2-L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test-retest reliability. Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test-retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.

  20. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging

    PubMed Central

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, SF; Horska, A; Brancati, FL; Diehl, AM; Smith, S; Clark, JM

    2009-01-01

    Objective To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Design Feature evaluation and test–retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. Subjects A random sample of 15 obese adults with type 2 diabetes. Measurements Axial T1-weighted spin echo images centered at vertebral bodies of L2–L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test–retest reliability. Results Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test–retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Conclusion Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages. PMID:17700582

  1. Computer vision in cell biology.

    PubMed

    Danuser, Gaudenz

    2011-11-23

    Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Deep convolutional networks for automated detection of posterior-element fractures on spine CT

    NASA Astrophysics Data System (ADS)

    Roth, Holger R.; Wang, Yinong; Yao, Jianhua; Lu, Le; Burns, Joseph E.; Summers, Ronald M.

    2016-03-01

    Injuries of the spine, and its posterior elements in particular, are a common occurrence in trauma patients, with potentially devastating consequences. Computer-aided detection (CADe) could assist in the detection and classification of spine fractures. Furthermore, CADe could help assess the stability and chronicity of fractures, as well as facilitate research into optimization of treatment paradigms. In this work, we apply deep convolutional networks (ConvNets) for the automated detection of posterior element fractures of the spine. First, the vertebra bodies of the spine with their posterior elements are segmented in spine CT using multi-atlas label fusion. Then, edge maps of the posterior elements are computed. These edge maps serve as candidate regions for predicting a set of probabilities for fractures along the image edges using ConvNets in a 2.5D fashion (three orthogonal patches in the axial, coronal and sagittal planes). We explore three different methods for training the ConvNet using 2.5D patches along the edge maps of 'positive', i.e. fractured, and 'negative', i.e. non-fractured, posterior elements. An experienced radiologist retrospectively marked the location of 55 displaced posterior-element fractures in 18 trauma patients. We randomly split the data into training and testing cases. In testing, we achieve an area under the curve of 0.857, corresponding to 71% or 81% sensitivity at 5 or 10 false positives per patient, respectively. Analysis of our set of trauma patients demonstrates the feasibility of detecting posterior-element fractures in spine CT images using computer vision techniques such as deep convolutional networks.
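
    To make the 2.5D sampling concrete, the sketch below extracts three orthogonal patches (axial, coronal, sagittal) around a candidate edge voxel and stacks them as ConvNet input channels. The patch size and the border clamping are assumptions.

```python
import numpy as np

def patches_25d(volume, center, size=32):
    """Extract three orthogonal 2D patches around a voxel of a CT volume.

    volume: (Z, Y, X) array; center: (z, y, x) voxel; returns (3, size, size)."""
    h = size // 2
    z, y, x = (int(np.clip(c, h, dim - h))       # clamp to stay inside volume
               for c, dim in zip(center, volume.shape))
    axial    = volume[z, y - h:y + h, x - h:x + h]
    coronal  = volume[z - h:z + h, y, x - h:x + h]
    sagittal = volume[z - h:z + h, y - h:y + h, x]
    return np.stack([axial, coronal, sagittal], axis=0)
```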

  3. Computer Vision Syndrome.

    PubMed

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  4. Implementation of a robotic flexible assembly system

    NASA Technical Reports Server (NTRS)

    Benton, Ronald C.

    1987-01-01

    As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies and ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.

  5. Measuring exertion time, duty cycle and hand activity level for industrial tasks using computer vision.

    PubMed

    Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G

    2017-12-01

    Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm, and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found it remained unaffected when DC error was less than 5%. Thus, a DC error less than 10% will impact HAL less than 0.5 HAL, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
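
    The duty-cycle computation itself reduces to a ratio over the video timeline. Assuming a per-frame binary exertion signal from the tracking stage (an assumed interface, not the paper's), it looks like this:

```python
import numpy as np

def exertion_stats(exertion, fps):
    """From a per-frame binary exertion signal (1 = exerting), return the
    total exertion time in seconds and the duty cycle in percent."""
    exertion = np.asarray(exertion, dtype=bool)
    exertion_time = exertion.sum() / fps          # seconds spent exerting
    duty_cycle = 100.0 * exertion.mean()          # exerting fraction of the task
    return exertion_time, duty_cycle

# e.g. exertion_stats(classifier_output, fps=30.0)
```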

  6. A Novel Interdisciplinary Approach to Socio-Technical Complexity

    NASA Astrophysics Data System (ADS)

    Bassetti, Chiara

    The chapter presents a novel interdisciplinary approach that integrates micro-sociological analysis into computer-vision and pattern-recognition modeling and algorithms, the purpose being to tackle socio-technical complexity at a systemic yet micro-grounded level. The approach is empirically grounded and both theoretically and analytically driven, yet systemic and multidimensional, semi-supervised and computable, and oriented towards large-scale applications. The chapter describes the proposed approach especially as regards its sociological foundations, and as applied to the analysis of a particular setting: sport-spectator crowds. Crowds, better defined as large gatherings, are almost ever-present in our societies, and capturing their dynamics is crucial. From social sciences to public safety management and emergency response, modeling and predicting large gatherings' presence and dynamics, and thus possibly preventing critical situations and being able to react to them properly, is fundamental. This is where semi-automated technologies can make the difference. The work presented in this chapter is intended as a scientific step towards such an objective.

  7. Passive range estimation for rotorcraft low-altitude flight

    NASA Technical Reports Server (NTRS)

    Sridhar, B.; Suorsa, R.; Hussien, B.

    1991-01-01

    The automation of rotorcraft low-altitude flight presents challenging problems in control, computer vision and image understanding. A critical element in this problem is the ability to detect and locate obstacles, using on-board sensors, and modify the nominal trajectory. This requirement is also necessary for the safe landing of an autonomous lander on Mars. This paper examines some of the issues in the location of objects using a sequence of images from a passive sensor, and describes a Kalman filter approach to estimate the range to obstacles. The Kalman filter is also used to track features in the images leading to a significant reduction of search effort in the feature extraction step of the algorithm. The method can compute range for both straight line and curvilinear motion of the sensor. A laboratory experiment was designed to acquire a sequence of images along with sensor motion parameters under conditions similar to helicopter flight. Range estimation results using this imagery are presented.
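
    The paper's filter also tracks image features and handles curvilinear sensor motion; the scalar sketch below shows only the core recursion, fusing noisy per-frame range measurements (e.g. from two-view triangulation of a feature) with a known closing-speed motion model. All tuning values are assumptions.

```python
import numpy as np

def kalman_range(measurements, v, dt, q=0.05, r_meas=4.0, r0=50.0, p0=100.0):
    """Scalar Kalman filter for the range to a tracked obstacle.

    v: known forward sensor speed; q, r_meas: process/measurement variances;
    r0, p0: initial range estimate and its variance."""
    r, p = r0, p0
    estimates = []
    for z in measurements:
        # Predict: the sensor closes on the obstacle at speed v.
        r -= v * dt
        p += q
        # Update with the noisy range measurement.
        k = p / (p + r_meas)
        r += k * (z - r)
        p *= (1.0 - k)
        estimates.append(r)
    return np.array(estimates)
```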

  8. Evolution of the 3-dimensional video system for facial motion analysis: ten years' experiences and recent developments.

    PubMed

    Tzou, Chieh-Han John; Pona, Igor; Placheta, Eva; Hold, Alina; Michaelidou, Maria; Artner, Nicole; Kropatsch, Walter; Gerber, Hans; Frey, Manfred

    2012-08-01

    Since the implementation of the computer-aided system for assessing facial palsy in 1999 by Frey et al (Plast Reconstr Surg. 1999;104:2032-2039), no similar system that can make an objective, three-dimensional, quantitative analysis of facial movements has been marketed. This system has been in routine use since its launch, and it has proven to be reliable, clinically applicable, and therapeutically accurate. With the cooperation of international partners, more than 200 patients were analyzed. Recent developments in computer vision, mostly in the area of generative face models (applying active appearance models and extensions, optical flow, and video tracking), have been successfully incorporated to automate the prototype system. Further market-ready development and a business partner will be needed to enable production of this system, enhancing clinical methodology in diagnostic and prognostic accuracy as a personalized therapy concept and leading to better results and higher quality of life for patients with impaired facial function.

  9. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  10. Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft

    DTIC Science & Technology

    2017-06-01

    This thesis examines the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system, and their integration into an autonomous aircraft control system. Subject terms: autonomous systems, auto-land, computer vision, image processing.

  11. Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.

    PubMed

    Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer

    2015-11-01

    Counting objects is a fundamental image processing primitive, and has many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even with that, fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask humans as few questions as possible (under certain intuitively reasonable assumptions that we justify experimentally in our paper); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms, or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.

  12. Comparison of human expert and computer-automated systems using magnitude-squared coherence (MSC) and bootstrap distribution statistics for the interpretation of pattern electroretinograms (PERGs) in infants with optic nerve hypoplasia (ONH).

    PubMed

    Fisher, Anthony C; McCulloch, Daphne L; Borchert, Mark S; Garcia-Filion, Pamela; Fink, Cassandra; Eleuteri, Antonio; Simpson, David M

    2015-08-01

    Pattern electroretinograms (PERGs) have inherently low signal-to-noise ratios and can be difficult to detect when degraded by pathology or noise. We compare an objective system for automated PERG analysis with expert human interpretation in children with optic nerve hypoplasia (ONH) with PERGs ranging from clear to undetectable. PERGs were recorded uniocularly with chloral hydrate sedation in children with ONH (aged 3.5-35 months). Stimuli were reversing checks of four sizes focused using an optical system incorporating the cycloplegic refraction. Forty PERG records were analysed; 20 selected at random and 20 from eyes with good vision (fellow eyes or eyes with mild ONH) from over 300 records. Two experts identified P50 and N95 of the PERGs after manually deleting trials with movement artefact, slow-wave EEG (4-8 Hz) or other noise from raw data for 150 check reversals. The automated system first identified present/not-present responses using a magnitude-squared coherence criterion and then, for responses confirmed as present, estimated the P50 and N95 cardinal positions as the turning points in local third-order polynomials fitted in the -3 dB bandwidth [0.25 … 45] Hz. Confidence limits were estimated from bootstrap re-sampling with replacement. The automated system uses an interactive Internet-available webpage tool (see http://clinengnhs.liv.ac.uk/esp_perg_1.htm). The automated system detected 28 PERG signals above the noise level (p ≤ 0.05 for H0). Good subjective quality ratings were indicative of significant PERGs; however, poor subjective quality did not necessarily predict non-significant signals. P50 and N95 implicit times showed good agreement between the two experts and between experts and the automated system. For the N95 amplitude measured to P50, the experts differed by an average of 13% consistent with differing interpretations of peaks within noise, while the automated amplitude measure was highly correlated with the expert measures but was proportionally larger. Trial-by-trial review of these data required approximately 6.5 h for each human expert, while automated data processing required <4 min, excluding overheads relating to data transfer. An automated computer system for PERG analysis, using a panel of signal processing and statistical techniques, provides objective present/not-present detection and cursor positioning with explicit confidence intervals. The system achieves, within an efficient and robust statistical framework, estimates of P50 and N95 amplitudes and implicit times similar to those of clinical experts.
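
    The presence test is compact to state: across check-reversal epochs, the magnitude-squared coherence at the stimulation frequency compares phase-locked energy to total energy, and a criterion on that statistic declares the PERG present or absent. The epoch layout below is an assumed interface; the published system adds bootstrap confidence limits and automated cursor placement.

```python
import numpy as np

def msc(epochs, stim_freq, fs):
    """Magnitude-squared coherence of an evoked response at stim_freq.

    epochs: (n_epochs, n_samples) array, one row per sweep. Returns a value
    in [0, 1]; under H0 (no response) its null distribution is known, so a
    threshold at a chosen p-value gives a present/absent decision."""
    n_epochs, n = epochs.shape
    k = int(round(stim_freq * n / fs))            # FFT bin of the stimulus
    spectra = np.fft.rfft(epochs, axis=1)[:, k]
    return np.abs(spectra.mean()) ** 2 / np.mean(np.abs(spectra) ** 2)
```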

  13. Dynamic Vision for Control

    DTIC Science & Technology

    2006-07-27

    The goal of this project was to develop analytical and computational tools to make vision a viable sensor for control. We have proposed the framework of stereoscopic segmentation, in which multiple images of the same objects are jointly processed to extract geometry.

  14. Face recognition in newly hatched chicks at the onset of vision.

    PubMed

    Wood, Samantha M W; Wood, Justin N

    2015-04-01

    How does face recognition emerge in the newborn brain? To address this question, we used an automated controlled-rearing method with a newborn animal model: the domestic chick (Gallus gallus). This automated method allowed us to examine chicks' face recognition abilities at the onset of both face experience and object experience. In the first week of life, newly hatched chicks were raised in controlled-rearing chambers that contained no objects other than a single virtual human face. In the second week of life, we used an automated forced-choice testing procedure to examine whether chicks could distinguish that familiar face from a variety of unfamiliar faces. Chicks successfully distinguished the familiar face from most of the unfamiliar faces-for example, chicks were sensitive to changes in the face's age, gender, and orientation (upright vs. inverted). Thus, chicks can build an accurate representation of the first face they see in their life. These results show that the initial state of face recognition is surprisingly powerful: Newborn visual systems can begin encoding and recognizing faces at the onset of vision. (c) 2015 APA, all rights reserved.

  15. Sensor control of robot arc welding

    NASA Technical Reports Server (NTRS)

    Sias, F. R., Jr.

    1985-01-01

    A basic problem in the application of robots to welding was studied: how to guide a torch along a weld seam using sensory information. The aim was to improve the quality and consistency of certain Gas Tungsten Arc welds on the Space Shuttle Main Engine (SSME) that are too complex geometrically for conventional automation and are therefore done by hand. The particular problems associated with SSME manufacturing and weld-seam tracking were analyzed, with an emphasis on computer vision methods. Special interface software was developed for the MINC computer, allowing it to be used both as a test system to check out the robot interface software and later as a development tool for further investigation of sensory systems to be incorporated into welding procedures.

  16. Background estimation and player detection in badminton video clips using histogram of pixel values along temporal dimension

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu

    2015-12-01

    Computer vision is an important tool for sports video processing. However, its application in badminton match analysis is very limited. In this study, we proposed straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compared the results with the naive averaging method and the mixture-of-Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture-of-Gaussians player detection method. The preliminary results indicated that the proposed histogram-based method could estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
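
    A minimal sketch of the temporal-histogram idea (our function names; the bin count and threshold are illustrative): the modal intensity bin of each pixel over time is taken as background, and player pixels are those that differ from it.

    ```python
    import numpy as np

    def estimate_background(frames, bins=32):
        """Background = per-pixel modal intensity over time (robust to players
        passing through a pixel, unlike a plain temporal average)."""
        t, h, w = frames.shape                           # (T, H, W) uint8 stack
        q = (frames.astype(np.int32) * bins // 256).reshape(t, -1)
        hist = np.zeros((bins, h * w), dtype=np.int32)
        for b in range(bins):                            # histogram along time
            hist[b] = (q == b).sum(axis=0)
        mode = hist.argmax(axis=0)
        return (((mode + 0.5) * 256) // bins).astype(np.uint8).reshape(h, w)

    def player_mask(frame, background, thresh=30):
        """Foreground (player) pixels deviate from the estimated background."""
        return np.abs(frame.astype(np.int16) - background.astype(np.int16)) > thresh
    ```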

  17. Automated site characterization for robotic sample acquisition systems

    NASA Astrophysics Data System (ADS)

    Scholl, Marija S.; Eberlein, Susan J.

    1993-04-01

    A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.

  18. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size device equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (such as VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
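
    For illustration, a software analogue of that multi-scale Laplacian-of-Gaussian edge detector (a NumPy/SciPy sketch, not the camera's VHDL): convolve at one scale and mark zero crossings; looping over several sigma values gives the multi-scale version.

    ```python
    import numpy as np
    from scipy import ndimage

    def log_edges(img, sigma=2.0):
        """Laplacian-of-Gaussian edge detection: filter, then find zero crossings."""
        response = ndimage.gaussian_laplace(img.astype(float), sigma=sigma)
        sign = np.signbit(response)
        edges = np.zeros_like(sign)
        edges[:-1, :] |= sign[:-1, :] != sign[1:, :]     # vertical sign changes
        edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]     # horizontal sign changes
        return edges
    ```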

  19. Automated Data Processing as an AI Planning Problem

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wanlin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth Science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we have developed a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products. Data processing domains are substantially different from other planning domains that have been explored, and this has led us to substantially different choices in terms of representation and algorithms. We discuss some of these differences and discuss the approach we have adopted.

  20. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
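
    To make the metrological flavour of aims (i)-(iii) concrete, the sketch below fits a least-squares plane to a point-cloud patch and reports the RMS point-to-plane residual, the kind of quality indicator obtained from known geometric shapes in VDI/VDE-style evaluations. The plane as reference shape and the function name are our illustrative choices, not the paper's.

    ```python
    import numpy as np

    def plane_fit_rms(points):
        """Least-squares plane through an (N, 3) point-cloud patch; returns the
        unit normal, centroid, and RMS point-to-plane residual in point units."""
        centroid = points.mean(axis=0)
        # the singular vector with the smallest singular value is the plane normal
        _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
        normal = vt[-1]
        residuals = (points - centroid) @ normal          # signed distances
        return normal, centroid, float(np.sqrt((residuals ** 2).mean()))
    ```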

  1. High precision automated face localization in thermal images: oral cancer dataset as test case

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.

    2017-02-01

    Automated face detection is the pivotal step in computer vision-aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. It is therefore a challenging task to formulate a completely adaptive framework that reliably localizes the face in such a subject-specific modality. Our model first extracts the most probable facial regions by minimum-error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates domain knowledge by exploiting temperature differences between strategic locations of the face. To the best of our knowledge, this is the first work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous work on face detection has not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted to any DITI-guided facial healthcare or biometric application.
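
    The first two steps named above, minimum-error thresholding followed by projections, might look as follows (our sketch, assuming an 8-bit thermal image; the subject-adaptive refinements and temperature-difference cues are omitted).

    ```python
    import numpy as np

    def min_error_threshold(img):
        """Kittler-Illingworth minimum-error threshold from a 256-bin histogram
        (img is assumed uint8)."""
        p = np.bincount(img.ravel(), minlength=256).astype(float)
        p /= p.sum()
        i = np.arange(256, dtype=float)
        q1 = np.cumsum(p)
        q2 = 1.0 - q1
        m = np.cumsum(p * i)
        s = np.cumsum(p * i * i)
        eps = 1e-12
        mu1 = m / np.maximum(q1, eps)
        mu2 = (m[-1] - m) / np.maximum(q2, eps)
        v1 = s / np.maximum(q1, eps) - mu1 ** 2
        v2 = (s[-1] - s) / np.maximum(q2, eps) - mu2 ** 2
        with np.errstate(divide="ignore", invalid="ignore"):
            J = (1.0 + q1 * np.log(v1) + q2 * np.log(v2)
                 - 2.0 * (q1 * np.log(q1) + q2 * np.log(q2)))
        J[(q1 < 1e-6) | (q2 < 1e-6) | (v1 <= 0) | (v2 <= 0)] = np.inf
        return int(np.argmin(J))

    def face_box(thermal):
        """Bounding box of warm (facial) pixels from row/column projections."""
        mask = thermal > min_error_threshold(thermal)
        rows = np.flatnonzero(mask.sum(axis=1))
        cols = np.flatnonzero(mask.sum(axis=0))
        return rows[0], rows[-1], cols[0], cols[-1]   # top, bottom, left, right
    ```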

  2. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision with broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After feature point matching is completed, the correspondence between matching points and 3D object points can be established using the calibrated camera parameters, which yields the 3D information.
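
    A minimal OpenCV sketch of the SGBM matching and triangulation steps (file names and parameters are illustrative; the Zhang checkerboard calibration with cv2.calibrateCamera and image rectification are assumed to have been done already).

    ```python
    import cv2
    import numpy as np

    left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

    block = 5
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=block,
                                 P1=8 * block * block, P2=32 * block * block,
                                 uniquenessRatio=10, speckleWindowSize=100)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # 4 fractional bits

    # With the reprojection matrix Q returned by cv2.stereoRectify, disparities
    # triangulate to 3D object points:
    # points_3d = cv2.reprojectImageTo3D(disparity, Q)
    ```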

  3. Optical See-Through Head Mounted Display Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise

    NASA Technical Reports Server (NTRS)

    Axholt, Magnus; Skoglund, Martin; Peterson, Stephen D.; Cooper, Matthew D.; Schoen, Thomas B.; Gustafsson, Fredrik; Ynnerman, Anders; Ellis, Stephen R.

    2010-01-01

    Augmented Reality (AR) is a technique by which computer generated signals synthesize impressions that are made to coexist with the surrounding real world as perceived by the user. Human smell, taste, touch and hearing can all be augmented, but most commonly AR refers to human vision being overlaid with information otherwise not readily available to the user. A correct calibration is important on an application level, ensuring that e.g. data labels are presented at correct locations, but also on a system level, to enable display techniques such as stereoscopy to function properly. Thus, calibration methodology, vital to AR, is an important research area. While great achievements have already been made, some properties of current calibration methods for augmenting vision do not translate from their traditional use in automated camera calibration to use with a human operator. This paper uses a Monte Carlo simulation of a standard direct linear transformation camera calibration to investigate how user-introduced head orientation noise affects the parameter estimation during a calibration procedure of an optical see-through head mounted display.
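
    For reference, the standard DLT estimate that such a Monte Carlo study perturbs can be sketched as below (our minimal implementation; in a simulation, zero-mean orientation noise is added to the 2D alignments and the spread of the recovered parameters is examined).

    ```python
    import numpy as np

    def dlt(points_3d, points_2d):
        """Direct linear transformation: estimate the 3x4 projection matrix
        from six or more world-to-image correspondences via SVD."""
        rows = []
        for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
            Xh = np.array([X, Y, Z, 1.0])
            rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
            rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
        _, _, vt = np.linalg.svd(np.asarray(rows))
        return vt[-1].reshape(3, 4)   # homogeneous solution, defined up to scale
    ```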

  4. Improving Pattern Recognition and Neural Network Algorithms with Applications to Solar Panel Energy Optimization

    NASA Astrophysics Data System (ADS)

    Zamora Ramos, Ernesto

    Artificial Intelligence is a big part of automation, and with today's technological advances, artificial intelligence has taken great strides towards positioning itself as the technology of the future to control, enhance and perfect automation. Computer vision, which includes pattern recognition, classification and machine learning, is at the core of decision making and is a vast and fruitful branch of artificial intelligence. In this work, we expose novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions detail an improved non-linear pre-processing technique to enhance poorly illuminated images, based on modifications to standard histogram equalization. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical image enhancement, and many others. We created a vision system for precise camera distance positioning, motivated by the need to correctly locate the robot for capture of solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends past image classification and, based on historical and experimental data, identifies the optimal moment at which to perform maintenance on marked solar panels so as to minimize energy and profit loss. In order to improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates. We explore state-of-the-art neural network training techniques, offering pointers and insights, culminating in the implementation of a complete library with support for modern deep learning architectures, multilayer perceptrons and convolutional neural networks. Our research with neural networks encountered a great deal of difficulty regarding hyperparameter estimation for good training convergence rate and accuracy. Most hyperparameters, including architecture, learning rate, regularization, trainable parameter (weight) initialization, and so on, are chosen via a trial-and-error process with some educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, to estimate, among a group of candidate strategies, which would make the network converge to the highest classification accuracy fastest with high probability. Our method provides a quick, objective measure for comparing initialization strategies so as to select the best possible among them beforehand, without having to complete multiple training sessions for each candidate strategy to compare final results.
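
    For context, the baseline that the described non-linear pre-processing modifies is plain global histogram equalization, sketched below (the dissertation's modified variant is not reproduced here).

    ```python
    import numpy as np

    def equalize(img):
        """Global histogram equalization of an 8-bit grayscale image."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum().astype(float)
        cdf_min = cdf[np.flatnonzero(hist)[0]]
        lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12))
        return lut.clip(0, 255).astype(np.uint8)[img]
    ```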

  5. Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States

    PubMed Central

    Gebru, Timnit; Krause, Jonathan; Wang, Yilun; Chen, Duyun; Deng, Jia; Aiden, Erez Lieberman; Fei-Fei, Li

    2017-01-01

    The United States spends more than $250 million each year on the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed several years. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may become an increasingly practical supplement to the ACS. Here, we present a method that estimates socioeconomic characteristics of regions spanning 200 US cities by using 50 million images of street scenes gathered with Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22 million automobiles in total (8% of all automobiles in the United States), were used to accurately estimate income, race, education, and voting patterns at the zip code and precinct level. (The average US precinct contains ∼1,000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographics may effectively complement labor-intensive approaches, with the potential to measure demographics with fine spatial resolution, in close to real time. PMID:29183967

  6. Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States.

    PubMed

    Gebru, Timnit; Krause, Jonathan; Wang, Yilun; Chen, Duyun; Deng, Jia; Aiden, Erez Lieberman; Fei-Fei, Li

    2017-12-12

    The United States spends more than $250 million each year on the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed several years. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may become an increasingly practical supplement to the ACS. Here, we present a method that estimates socioeconomic characteristics of regions spanning 200 US cities by using 50 million images of street scenes gathered with Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22 million automobiles in total (8% of all automobiles in the United States), were used to accurately estimate income, race, education, and voting patterns at the zip code and precinct level. (The average US precinct contains ∼1,000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographics may effectively complement labor-intensive approaches, with the potential to measure demographics with fine spatial resolution, in close to real time. Copyright © 2017 the Author(s). Published by PNAS.

  7. Analyzing the Cohesion of English Text and Discourse with Automated Computer Tools

    ERIC Educational Resources Information Center

    Jeon, Moongee

    2014-01-01

    This article investigates the lexical and discourse features of English text and discourse with automated computer technologies. Specifically, this article examines the cohesion of English text and discourse with automated computer tools, Coh-Metrix and TEES. Coh-Metrix is a text analysis computer tool that can analyze English text and discourse…

  8. Smartphone, tablet computer and e-reader use by people with vision impairment.

    PubMed

    Crossland, Michael D; Silva, Rui S; Macedo, Antonio F

    2014-09-01

    Consumer electronic devices such as smartphones, tablet computers, and e-book readers have become far more widely used in recent years. Many of these devices contain accessibility features such as large print and speech. Anecdotal experience suggests people with vision impairment frequently make use of these systems. Here we survey people with self-identified vision impairment to determine their use of this equipment. An internet-based survey was advertised to people with vision impairment by word of mouth, social media, and online. Respondents were asked demographic information, what devices they owned, what they used these devices for, and what accessibility features they used. One hundred and thirty-two complete responses were received. Twenty-six percent of the sample reported that they had no vision and the remainder reported they had low vision. One hundred and seven people (81%) reported using a smartphone. Those with no vision were as likely to use a smartphone or tablet as those with low vision. Speech was found useful by 59% of smartphone users. Fifty-one percent of smartphone owners used the camera and screen as a magnifier. Forty-eight percent of the sample used a tablet computer, and 17% used an e-book reader. The most frequently cited reasons for not using these devices included cost and lack of interest. Smartphones, tablet computers, and e-book readers can be used by people with vision impairment. Speech is used by people with low vision as well as those with no vision. Many of our (self-selected) group used their smartphone camera and screen as a magnifier, and others used the camera flash as a spotlight. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  9. An embedded vision system for an unmanned four-rotor helicopter

    NASA Astrophysics Data System (ADS)

    Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James

    2006-10-01

    In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling daughter board (SAIL), attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has given very good results in its performance of a number of real-time robotic vision algorithms.

  10. Automatic micropropagation of plants--the vision-system: graph rewriting as pattern recognition

    NASA Astrophysics Data System (ADS)

    Schwanke, Joerg; Megnet, Roland; Jensch, Peter F.

    1993-03-01

    The automation of plant-micropropagation is necessary to produce high amounts of biomass. Plants have to be dissected on particular cutting-points. A vision-system is needed for the recognition of the cutting-points on the plants. With this background, this contribution is directed to the underlying formalism to determine cutting-points on abstract-plant models. We show the usefulness of pattern recognition by graph-rewriting along with some examples in this context.

  11. Development of a machine vision system for automated structural assembly

    NASA Technical Reports Server (NTRS)

    Sydow, P. Daniel; Cooper, Eric G.

    1992-01-01

    Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose estimation accuracy needed to define the target position.

  12. Computer vision-based analysis of foods: a non-destructive colour measurement tool to monitor quality and safety.

    PubMed

    Mogol, Burçe Ataç; Gökmen, Vural

    2014-05-01

    Computer vision-based image analysis has been widely used in the food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed for extracting mean colour or featured colour information from digital images of foods. These types of information may be of particular importance, as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with the acrylamide content of potato chips or cookies. Likewise, the porosity index, an important physical property of breadcrumb, can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products in a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
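
    A minimal sketch of the mean-colour approach (the file name is a placeholder; a real system would also need colour calibration of the camera and lighting):

    ```python
    import numpy as np
    from skimage import color, io

    rgb = io.imread("chips.png")[..., :3] / 255.0   # digital image of the sample
    lab = color.rgb2lab(rgb)                        # device-independent CIELAB
    mean_a_star = lab[..., 1].mean()                # mean CIE a* (redness axis)
    print(f"mean a* = {mean_a_star:.2f}")
    ```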

  13. A traffic situation analysis system

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin

    2011-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. For example, embedded vision systems built into vehicles can be used as early warning systems, or stationary camera systems can modify the switching frequency of signals at intersections. Today the automated analysis of traffic situations is still in its infancy: the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully understood by a vision system. We present steps towards such a traffic monitoring system, which is designed to detect potentially dangerous traffic situations, especially incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system is field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in an outdoor-capable housing. Two cameras run vehicle detection software including license plate detection and recognition; one camera runs a complex pedestrian detection and tracking module based on the HOG detection principle. As a supplement, all 3 cameras use additional optical flow computation on a low-resolution video stream in order to estimate the motion path and speed of objects. This work describes the foundation for all 3 object detection modalities (pedestrians, vehicles, license plates), and explains the system setup and its design.
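
    The HOG detection principle used by the pedestrian module is available off the shelf in OpenCV; a minimal sketch with illustrative parameters and a placeholder file name:

    ```python
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("crossing.jpg")
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:                      # draw pedestrian detections
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    ```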

  14. Automated grading system for evaluation of ocular redness associated with dry eye

    PubMed Central

    Rodriguez, John D; Johnston, Patrick R; Ousler, George W; Smith, Lisa M; Abelson, Mark B

    2013-01-01

    Background We have observed that dry eye redness is characterized by a prominence of fine horizontal conjunctival vessels in the exposed ocular surface of the interpalpebral fissure, and have incorporated this feature into the grading of redness in clinical studies of dry eye. Aim To develop an automated method of grading dry eye-associated ocular redness in order to expand on the clinical grading system currently used. Methods Ninety-nine images from 26 dry eye subjects were evaluated by five graders using a 0–4 (in 0.5 increments) dry eye redness (Ora Calibra™ Dry Eye Redness Scale [OCDER]) scale. For the automated method, the OpenCV computer vision library was used to develop software for calculating redness and horizontal conjunctival vessels (noted as "horizontality"). From the original photograph, the region of interest (ROI) was selected manually using the open-source ImageJ software. Total average redness intensity (Com-Red) was calculated from a single-channel 8-bit image as R − 0.83G − 0.17B, where R, G and B were the respective intensities of the red, green and blue channels. The location of vessels was detected by normalizing the blue channel and selecting pixels with an intensity of less than 97% of the mean. The horizontal component (Com-Hor) was calculated by the first-order Sobel derivative in the vertical direction, and the score was calculated as the average blue-channel image intensity of this vertical derivative. Pearson correlation coefficients, accuracy and concordance correlation coefficients (CCC) were calculated after regression and standardized regression of the dataset. Results The agreement (both Pearson's and CCC) among investigators using the OCDER scale was 0.67, while the agreement of investigator with computer was 0.76. A multiple regression using both redness and horizontality improved the agreement CCC from 0.66 and 0.69 to 0.76, demonstrating the contribution of vessel geometry to the overall grade. Computer analysis of a given image has 100% repeatability and zero variability from session to session. Conclusion This objective means of grading ocular redness in a unified fashion has potential significance as a new clinical endpoint. In comparisons between computer and investigator, computer grading proved to be more reliable than another investigator using the OCDER scale. The best-fitting model based on the present sample, and usable for future studies, was C4 = −12.24 + 2.12·C2HOR + 0.88·C2RED, where C4 is the predicted investigator grade, and C2HOR and C2RED are logarithmic transformations of the computer-calculated parameters Com-Hor and Com-Red. Considering its superior repeatability, computer-automated grading might be preferable to investigator grading in multicentered dry eye studies, in which the subtle differences in redness incurred by treatment have historically been difficult to define. PMID:23814457
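
    A minimal sketch of the two image scores as defined in the Methods, assuming a manually selected RGB ROI (the blue-channel normalization and vessel-pixel selection of the full method are simplified away):

    ```python
    import numpy as np
    from scipy import ndimage

    def redness_scores(roi):
        """Com-Red and a simplified Com-Hor on an (H, W, 3) uint8 RGB ROI."""
        r = roi[..., 0].astype(float)
        g = roi[..., 1].astype(float)
        b = roi[..., 2].astype(float)
        com_red = (r - 0.83 * g - 0.17 * b).mean()   # overall redness intensity
        vertical_deriv = ndimage.sobel(b, axis=0)    # responds to horizontal vessels
        com_hor = np.abs(vertical_deriv).mean()
        return com_red, com_hor
    ```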

  15. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    PubMed

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.

  16. Job-shop scheduling applied to computer vision

    NASA Astrophysics Data System (ADS)

    Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David

    1997-09-01

    This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and the waiting time of in-process tasks. This condition is very important in some applications of computer vision in which the time to finish the total process is particularly critical: quality control in industrial inspection, real-time computer vision, guided robots. The scheduling algorithm is based on the use of two matrices obtained from the precedence relationships between tasks, and on the data derived from those matrices. The developed scheduling algorithm has been tested in an application of quality control using computer vision. The results obtained have been satisfactory in the application of different image processing algorithms.

  17. Development of the method of aggregation to determine the current storage area using computer vision and radiofrequency identification

    NASA Astrophysics Data System (ADS)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for the construction of positioning and control systems in industrial plants, based on aggregation to determine the current storage area using computer vision and radio-frequency identification. It describes the development of the hardware of a positioning system for industrial products on the territory of a plant on the basis of a radio-frequency grid, and of the hardware of a positioning system in the plant on the basis of computer vision methods. It then describes the development of the method of aggregation to determine the current storage area using computer vision and radio-frequency identification. Experimental studies in laboratory and production conditions have been conducted and are described in the article.

  18. Enhanced computer vision with Microsoft Kinect sensor: a review.

    PubMed

    Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.

  19. Texture and art with deep neural networks.

    PubMed

    Gatys, Leon A; Ecker, Alexander S; Bethge, Matthias

    2017-10-01

    Although the studies of biological vision and computer vision attempt to understand powerful visual information processing from different angles, they have a long history of informing each other. Recent advances in texture synthesis that were motivated by visual neuroscience have led to a substantial advance in image synthesis and manipulation in computer vision using convolutional neural networks (CNNs). Here, we review these recent advances and discuss how they can in turn inspire new research in visual perception and computational neuroscience. Copyright © 2017. Published by Elsevier Ltd.
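
    At the core of the reviewed texture-synthesis work is a simple statistic: the Gram matrix of CNN feature maps, whose entries are correlations between channel activations within a layer. A minimal sketch (assuming activations have already been extracted as a NumPy array):

    ```python
    import numpy as np

    def gram_matrix(features):
        """Gram matrix of one layer's activations, shape (channels, H, W)."""
        c, h, w = features.shape
        f = features.reshape(c, h * w)
        return f @ f.T / (h * w)    # channel-by-channel feature correlations
    ```

    Matching these matrices, layer by layer, between a synthesized image and a reference image is what drives the texture synthesis and style manipulation discussed above.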

  20. IntraFace

    PubMed Central

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2016-01-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987

  1. Robotic Variable Polarity Plasma Arc (VPPA) Welding

    NASA Technical Reports Server (NTRS)

    Jaffery, Waris S.

    1993-01-01

    The need for automated plasma welding was identified in the early stages of the Space Station Freedom Program (SSFP) because it requires approximately 1.3 miles of welding for assembly. As a result of the Variable Polarity Plasma Arc Welding (VPPAW) process's ability to make virtually defect-free welds in aluminum, it was chosen to fulfill the welding needs. Space Station Freedom will be constructed of 2219 aluminum utilizing the computer controlled VPPAW process. The 'Node Radial Docking Port', with its saddle-shaped weld path, has a constantly changing surface angle over 360 deg of the 282 inch weld. The automated robotic VPPAW process requires eight axes of motion (six axes of robot motion and two axes of positioner movement). The robot control system is programmed to maintain Torch Center Point (TCP) orientation perpendicular to the part while the part positioner is tilted and rotated to maintain the vertical-up orientation required by the VPPAW process. The combined speed of the robot and the positioner are integrated to maintain a constant speed between the part and the torch. A laser-based vision sensor system has also been integrated to track the seam and map the surface of the profile during welding.

  2. Hierarchical extraction of urban objects from mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia

    2015-01-01

    Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.

  3. Robotic Variable Polarity Plasma Arc (VPPA) welding

    NASA Astrophysics Data System (ADS)

    Jaffery, Waris S.

    1993-02-01

    The need for automated plasma welding was identified in the early stages of the Space Station Freedom Program (SSFP) because it requires approximately 1.3 miles of welding for assembly. As a result of the Variable Polarity Plasma Arc Welding (VPPAW) process's ability to make virtually defect-free welds in aluminum, it was chosen to fulfill the welding needs. Space Station Freedom will be constructed of 2219 aluminum utilizing the computer controlled VPPAW process. The 'Node Radial Docking Port', with its saddle-shaped weld path, has a constantly changing surface angle over 360 deg of the 282 inch weld. The automated robotic VPPAW process requires eight axes of motion (six axes of robot motion and two axes of positioner movement). The robot control system is programmed to maintain Torch Center Point (TCP) orientation perpendicular to the part while the part positioner is tilted and rotated to maintain the vertical-up orientation required by the VPPAW process. The combined speed of the robot and the positioner are integrated to maintain a constant speed between the part and the torch. A laser-based vision sensor system has also been integrated to track the seam and map the surface of the profile during welding.

  4. Development of AN All-Purpose Free Photogrammetric Tool

    NASA Astrophysics Data System (ADS)

    González-Aguilera, D.; López-Fernández, L.; Rodriguez-Gonzalvez, P.; Guerrero, D.; Hernandez-Lopez, D.; Remondino, F.; Menna, F.; Nocerino, E.; Toschi, I.; Ballabeni, A.; Gaiani, M.

    2016-06-01

    Photogrammetry is currently facing some challenges and changes mainly related to automation, ubiquitous processing and the variety of applications. Within an ISPRS Scientific Initiative, a team of researchers from USAL, UCLM, FBK and UNIBO has developed an open photogrammetric tool, called GRAPHOS (inteGRAted PHOtogrammetric Suite). GRAPHOS allows one to obtain dense and metric 3D point clouds from terrestrial and UAV images. It encloses robust photogrammetric and computer vision algorithms with the following aims: (i) increase automation, allowing dense 3D point clouds to be obtained through a friendly and easy-to-use interface; (ii) increase flexibility, working with any type of images, scenarios and cameras; (iii) improve quality, guaranteeing high accuracy and resolution; (iv) preserve photogrammetric reliability and repeatability. Last but not least, GRAPHOS also has an educational component, reinforced with didactic explanations of the algorithms and their performance. The developments were carried out at different levels: GUI realization, image pre-processing, photogrammetric processing with weight parameters, dataset creation and system evaluation. The paper presents in detail the developments of GRAPHOS with all its photogrammetric components and the evaluation analyses based on various image datasets. GRAPHOS is distributed for free for research and educational needs.

  5. Gamifying Video Object Segmentation.

    PubMed

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  6. IntraFace.

    PubMed

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2015-05-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.

  7. Automated packing systems: review of industrial implementations

    NASA Astrophysics Data System (ADS)

    Whelan, Paul F.; Batchelor, Bruce G.

    1993-08-01

    A rich theoretical background to the problems that occur in the automation of material handling can be found in the operations research, production engineering, systems engineering and automation (more specifically, machine vision) literature. This work has contributed towards the design of intelligent handling systems. This paper reviews the application of these automated material handling and packing techniques to industrial problems. The discussion also highlights the systems integration issues involved in these applications. An outline of one such industrial application, the automated placement of shape templates onto leather hides, is also discussed. The purpose of this system is to arrange shape templates on a leather hide in an efficient manner, so as to minimize leather waste, before the pieces are automatically cut from the hide. These pieces are used in the furniture and car manufacturing industries for the upholstery of high-quality leather chairs and car seats. Currently this type of operation is semi-automated. The paper outlines the problems involved in the full automation of such a procedure.

  8. Learning-based scan plane identification from fetal head ultrasound images

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoming; Annangi, Pavan; Gupta, Mithun; Yu, Bing; Padfield, Dirk; Banerjee, Jyotirmoy; Krishnan, Kajoli

    2012-03-01

    Acquisition of a clinically acceptable scan plane is a pre-requisite for ultrasonic measurement of anatomical features from B-mode images. In obstetric ultrasound, measurement of gestational age predictors, such as biparietal diameter and head circumference, is performed at the level of the thalami and cavum septi pellucidi. In an accurate scan plane, the head can be modeled as an ellipse, the thalami look like a butterfly, the cavum appears like an empty box, and the falx is a straight line along the major axis of a symmetric ellipse inclined either parallel or at small angles to the probe surface. Arriving at the correct probe placement on the mother's belly to obtain an accurate scan plane is a task of considerable challenge, especially for a new user of ultrasound. In this work, we present a novel automated learning-based algorithm to identify an acceptable fetal head scan plane. We divide the problem into cranium detection and a template matching step to capture the composite "butterfly" structure present inside the head, which mimics the visual cues used by an expert. The algorithm uses state-of-the-art Active Appearance Model techniques from the image processing and computer vision literature and ties them to the presence or absence of the inclusions within the head to automatically compute a score representing the goodness of a scan plane. This automated technique can potentially be used to train and aid new users of ultrasound.
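
    The template-matching half of the approach could be sketched with OpenCV as below (file names and the acceptance threshold are illustrative; the cranium detection and Active Appearance Model stages are not shown):

    ```python
    import cv2

    scan = cv2.imread("fetal_head.png", cv2.IMREAD_GRAYSCALE)
    butterfly = cv2.imread("butterfly_template.png", cv2.IMREAD_GRAYSCALE)

    # normalized cross-correlation of the template over the scan
    score_map = cv2.matchTemplate(scan, butterfly, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(score_map)
    plane_acceptable = best_score > 0.6   # assumed acceptance threshold
    ```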

  9. Computerized image analysis: estimation of breast density on mammograms

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

    2000-06-01

    An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm X 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.
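
    The final stage reduces to a simple ratio once the breast region has been segmented and the grey-level threshold chosen; a minimal sketch in our notation:

    ```python
    import numpy as np

    def percent_density(breast_pixels, threshold):
        """Percent density: dense-pixel area over breast area, in percent.
        breast_pixels: 1-D array of grey levels inside the segmented breast."""
        dense = breast_pixels > threshold
        return 100.0 * dense.sum() / breast_pixels.size
    ```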

  10. Automated Tracking of Animal Posture and Movement during Exploration and Sensory Orientation Behaviors

    PubMed Central

    Gomez-Marin, Alex; Partoune, Nicolas; Stephens, Greg J.; Louis, Matthieu

    2012-01-01

    Background The nervous functions of an organism are primarily reflected in the behavior it is capable of. Measuring behavior quantitatively, at high resolution and in an automated fashion, provides valuable information about the underlying neural circuit computation. Accordingly, computer-vision applications for animal tracking are becoming a key complementary toolkit to genetic, molecular and electrophysiological characterization in systems neuroscience. Methodology/Principal Findings We present Sensory Orientation Software (SOS) to measure behavior and infer sensory experience correlates. SOS is a simple and versatile system to track body posture and motion of single animals in two-dimensional environments. In the presence of a sensory landscape, tracking the trajectory of the animal's sensors and its postural evolution provides a quantitative framework to study sensorimotor integration. To illustrate the utility of SOS, we examine the orientation behavior of fruit fly larvae in response to odor, temperature and light gradients. We show that SOS is suitable to carry out high-resolution behavioral tracking for a wide range of organisms including flatworms, fishes and mice. Conclusions/Significance Our work contributes to the growing repertoire of behavioral analysis tools for collecting rich and fine-grained data to draw and test hypotheses about the functioning of the nervous system. By providing open access to our code and documenting the software design, we aim to encourage the adaptation of SOS by a wide community of non-specialists to their particular model organism and questions of interest. PMID:22912674

  11. Machine vision for digital microfluidics

    NASA Astrophysics Data System (ADS)

    Shin, Yong-Jun; Lee, Jeong-Bong

    2010-01-01

    Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.

  12. Impact of computer use on children's vision.

    PubMed

    Kozeis, N

    2009-10-01

    Today, millions of children use computers on a daily basis. Extensive viewing of the computer screen can lead to eye discomfort, fatigue, blurred vision and headaches, dry eyes and other symptoms of eyestrain. These symptoms may be caused by poor lighting, glare, an improper work station set-up, vision problems of which the person was not previously aware, or a combination of these factors. Children can experience many of the same symptoms related to computer use as adults. However, some unique aspects of how children use computers may make them more susceptible than adults to the development of these problems. In this study, the most common eye symptoms related to computer use in childhood, the possible causes and ways to avoid them are reviewed.

  13. Automated cassette-to-cassette substrate handling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, Joseph Arthur; Boyer, Jeremy James; Mack, Joseph

    2014-03-18

    An automated cassette-to-cassette substrate handling system includes a cassette storage module for storing a plurality of substrates in cassettes before and after processing. A substrate carrier storage module stores a plurality of substrate carriers. A substrate carrier loading/unloading module loads substrates from the cassette storage module onto the plurality of substrate carriers and unloads substrates from the plurality of substrate carriers to the cassette storage module. A transport mechanism transports the plurality of substrates between the cassette storage module and the plurality of substrate carriers and transports the plurality of substrate carriers between the substrate carrier loading/unloading module and a processing chamber. A vision system recognizes recesses in the plurality of substrate carriers corresponding to empty substrate positions in the substrate carrier. A processor receives data from the vision system and instructs the transport mechanism to transport substrates to positions on the substrate carrier in response to the received data.

  14. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Although simple in theory, many details are difficult for a practitioner to master, so that implementing a sea-wave 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field; it implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library; and, lastly, it includes a set of filtering techniques, both on the disparity map and the produced point cloud, to remove the vast majority of erroneous points that naturally arise from the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
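    As a minimal sketch of the dense-stereo step such a pipeline builds on (OpenCV provides the primitives WASS is stated to use; parameter values here are illustrative, and the matrix Q is assumed to come from rectification):

        import cv2
        import numpy as np

        def dense_point_cloud(imgL, imgR, Q):
            # imgL/imgR: rectified grayscale stereo pair;
            # Q: 4x4 disparity-to-depth matrix (e.g., from cv2.stereoRectify).
            matcher = cv2.StereoSGBM_create(minDisparity=0,
                                            numDisparities=128,  # multiple of 16
                                            blockSize=5)
            disp = matcher.compute(imgL, imgR).astype(np.float32) / 16.0
            points = cv2.reprojectImageTo3D(disp, Q)
            return points[disp > 0]  # crude filter: keep valid disparities only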

  15. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

    We re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half-century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the four decades following Licklider’s challenge, despite significant advancements in the fields of human factors and computer science, and Licklider’s vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider’s vision a reality. This paper provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.

  16. Spatial Statistics for Tumor Cell Counting and Classification

    NASA Astrophysics Data System (ADS)

    Wirjadi, Oliver; Kim, Yoo-Jin; Breuel, Thomas

    Counting and classifying cells in histological sections is a standard task in histology. One example is the grading of meningiomas, benign tumors of the meninges, which requires assessing the fraction of proliferating cells in an image. As this process is very time consuming when performed manually, automation is required. To address such problems, we propose a novel application of Markov point process methods in computer vision, leading to algorithms for computing the locations of circular objects in images. In contrast to previous algorithms using such spatial statistics methods in image analysis, the present one is fully trainable; this is achieved by combining point process methods with statistical classifiers. Using simulated data, the method proposed in this paper is shown to be more accurate and more robust to noise than standard image processing methods. On the publicly available SIMCEP benchmark for cell image analysis algorithms, the cell-count performance of the present method is significantly more accurate than previously published results, especially when cells form dense clusters. Furthermore, the proposed system performs as well as a state-of-the-art algorithm for the computer-aided histological grading of meningiomas when combined with a simple k-nearest-neighbor classifier for identifying proliferating cells.
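    The Markov point process detector itself is beyond a short sketch; the fragment below substitutes plain Laplacian-of-Gaussian blob detection for the detection stage and keeps the paper's idea of a k-nearest-neighbor classifier on simple per-cell features. All parameters and features are illustrative, not the authors'.

        import numpy as np
        from skimage.feature import blob_log
        from sklearn.neighbors import KNeighborsClassifier

        def count_and_classify(image, train_feats, train_labels, k=5):
            # Detect roughly circular bright spots (a stand-in for the
            # trainable point process detector described in the paper).
            blobs = blob_log(image, min_sigma=3, max_sigma=10, threshold=0.05)
            feats = []
            for r, c, s in blobs:
                r, c, w = int(r), int(c), int(2 * s)
                patch = image[max(r - w, 0):r + w + 1, max(c - w, 0):c + w + 1]
                feats.append([patch.mean(), patch.std()])  # toy features
            clf = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
            labels = clf.predict(np.array(feats)) if feats else np.array([])
            return len(blobs), labels  # total count, per-cell class labels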

  17. A novel computational approach for simultaneous tracking and feature extraction of C. elegans populations in fluid environments.

    PubMed

    Tsechpenakis, Gabriel; Bianchi, Laura; Metaxas, Dimitris; Driscoll, Monica

    2008-05-01

    The nematode Caenorhabditis elegans (C. elegans) is a genetic model widely used to dissect conserved basic biological mechanisms of development and nervous system function. C. elegans locomotion is under complex neuronal regulation and is impacted by genetic and environmental factors; thus, its analysis is expected to shed light on how genetic, environmental, and pathophysiological processes control behavior. To date, computer-based approaches have been used for the analysis of C. elegans locomotion; however, none of these is both high resolution and high throughput. We used computer vision methods to develop a novel automated approach for analyzing C. elegans locomotion. Our method provides information on position, trajectory, and body shape during locomotion and is designed to efficiently track multiple animals in cluttered images and under lighting variations. We used this method to describe in detail C. elegans movement in liquid for the first time and to analyze six unc-8 mutants, one mec-4 mutant, and one odr-1 mutant. We report features of nematode swimming not previously noted and show that our method detects differences in the swimming profiles of mutants that appear at first glance similar.
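    The authors' tracker is far richer than can be shown here; a much-simplified sketch of the same ingredients, foreground extraction plus frame-to-frame association, might look as follows (thresholds hypothetical).

        import cv2
        import numpy as np

        def track_frame(frame_gray, background, prev_centroids, max_jump=20.0):
            # Foreground = absolute difference from a static background estimate.
            diff = cv2.absdiff(frame_gray, background)
            _, mask = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            centroids = []
            for c in contours:
                m = cv2.moments(c)
                if m["m00"] > 0:
                    centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
            # Greedy nearest-neighbour association with the previous frame.
            matches = {}
            for i, p in enumerate(prev_centroids):
                if centroids:
                    d = [np.hypot(p[0] - q[0], p[1] - q[1]) for q in centroids]
                    j = int(np.argmin(d))
                    if d[j] < max_jump:
                        matches[i] = centroids[j]
            return centroids, matches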

  18. The Change to Administrative Computing in Schools.

    ERIC Educational Resources Information Center

    Brown, Daniel J.

    1984-01-01

    Describes a study of the process of school office automation which focuses on personnel reactions to administrative computing, what users view as the advantages and disadvantages of automation, perceived barriers to and facilitators of the change to automation, school personnel's views of long-term effects, and implications for school computer policy.…

  19. Fundamentals of Library Automation and Technology. Participant Workbook.

    ERIC Educational Resources Information Center

    Bridge, Frank; Walton, Robert

    This workbook presents outlines of topics to be covered during a two-day workshop on the fundamentals for library automation. Topics for the first day include: (1) Introduction; (2) Computer Technology--A Historical Overview; (3) Evolution of Library Automation; (4) Computer Hardware Technology--An Introduction; (5) Computer Software…

  20. System for Computer Automated Typesetting (SCAT) of Computer Authored Texts.

    ERIC Educational Resources Information Center

    Keeler, F. Laurence

    This description of the System for Computer Automated Typesetting (SCAT), an automated system for typesetting text and inserting special graphic symbols in programmed instructional materials created by the computer-aided authoring system AUTHOR, provides an outline of the design architecture of the system and an overview including the component…

  1. Computer Vision Syndrome: Implications for the Occupational Health Nurse.

    PubMed

    Lurati, Ann Regina

    2018-02-01

    Computers and other digital devices are commonly used both in the workplace and during leisure time. Computer vision syndrome (CVS) is a new health-related condition that negatively affects workers. This article reviews the pathology of and interventions for CVS with implications for the occupational health nurse.

  2. Photogrammetry on glaciers: Old and new knowledge

    NASA Astrophysics Data System (ADS)

    Pfeffer, W. T.; Welty, E.; O'Neel, S.

    2014-12-01

    In the past few decades terrestrial photogrammetry has become a widely used tool for glaciological research, brought about in part by the proliferation of high-quality, low-cost digital cameras, dramatic increases in the image-processing power of computers, and innovative progress in image processing, much of which has come from computer vision research and from the computer gaming industry. At present, glaciologists have developed their capacity to gather images much further than their ability to process them. Many researchers have accumulated vast inventories of imagery but have no efficient means to extract the data they desire from them. In many cases these are single-image time series, where the processing limitation lies in the paucity of methods to obtain 3-dimensional object-space information from measurements in the 2-dimensional image space; in other cases camera pairs have been operated, but no automated means is at hand for conventional stereometric analysis of many thousands of image pairs. Often the processing task is further complicated by weak camera geometry or ground control distribution, either of which will compromise the quality of 3-dimensional object-space solutions. Solutions exist for many of these problems, found sometimes among the latest computer vision results and sometimes buried in decades-old, pre-digital terrestrial photogrammetric literature. Other problems, particularly those arising from poorly constrained or underdetermined camera and ground control geometry, may be unsolvable. Small-scale, ground-based photography and photogrammetry of glaciers has grown over the past few decades in an organic and disorganized fashion, with much duplication of effort and little coordination or sharing of knowledge among researchers. Given the utility of terrestrial photogrammetry, its low cost (if properly developed and implemented), and the substantial value of the information to be had from it, further effort to share knowledge and methods would greatly benefit the community. We consider some of the main problems to be solved, and aspects of how optimal knowledge sharing might be accomplished.

  3. Pilot interaction with automated airborne decision making systems

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.; Chu, Y. Y.; Greenstein, J. S.; Walden, R. S.

    1976-01-01

    An investigation was made of interaction between a human pilot and automated on-board decision making systems. Research was initiated on the topic of pilot problem solving in automated and semi-automated flight management systems and attempts were made to develop a model of human decision making in a multi-task situation. A study was made of allocation of responsibility between human and computer, and discussed were various pilot performance parameters with varying degrees of automation. Optimal allocation of responsibility between human and computer was considered and some theoretical results found in the literature were presented. The pilot as a problem solver was discussed. Finally the design of displays, controls, procedures, and computer aids for problem solving tasks in automated and semi-automated systems was considered.

  4. Design and Development of a High Speed Sorting System Based on Machine Vision Guiding

    NASA Astrophysics Data System (ADS)

    Zhang, Wenchang; Mei, Jiangping; Ding, Yabin

    In this paper, a vision-based control strategy to perform high-speed pick-and-place tasks on an automated product line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper that grasps disordered objects from one moving conveyor and then places them on another in order. A CCD camera captures one image every time the conveyor moves a distance ds. Object position and shape are obtained after image processing. A target tracking method based on "servo motor + synchronous conveyor" is used to perform the high-speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-control strategy.
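    The conveyor-tracking arithmetic implied by the "servo motor + synchronous conveyor" scheme reduces to advancing the detected position by the belt displacement reported by the encoder; a hedged sketch (function and parameter names invented for illustration):

        def predicted_position(xy_detected, enc_at_detection, enc_now, mm_per_count):
            # Belt moves along +x; y is unchanged on a straight conveyor.
            dx = (enc_now - enc_at_detection) * mm_per_count
            return (xy_detected[0] + dx, xy_detected[1])

        # Example: object seen at (120.0, 45.0) mm at encoder count 1000;
        # by count 1600 at 0.05 mm/count it has advanced to (150.0, 45.0) mm.
        print(predicted_position((120.0, 45.0), 1000, 1600, 0.05))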

  5. Robotic vision techniques for space operations

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar

    1994-01-01

    Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters, will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) a high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and the presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting the appropriate wavelength, polarization, and look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.

  6. A multidisciplinary approach to solving computer related vision problems.

    PubMed

    Long, Jennifer; Helland, Magne

    2012-09-01

    This paper proposes a multidisciplinary approach to solving computer-related vision issues by including optometry as part of the problem-solving team. Computer workstation design is increasing in complexity. There are at least ten different professions that contribute to workstation design or provide advice to improve worker comfort, safety and efficiency. Optometrists have a role in identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, it is possible that advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer-related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration and examples of successful partnerships at a number of professional levels, including individual relationships between optometrists and ergonomists when they have mutual clients/patients, in undergraduate and postgraduate education, and in research. There is also scope for dialogue between optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve computer-related vision issues in a cohesive, rather than fragmented, way. Further exploration is required to understand the barriers to these professional relationships. © 2012 The College of Optometrists.

  7. An Automatic Image-Based Modelling Method Applied to Forensic Infography

    PubMed Central

    Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David

    2015-01-01

    This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative for modelling the complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene performed by the scientific police with any camera can be transformed into a scaled 3D model. PMID:25793628

  8. Ingenious Snake: An Adaptive Multi-Class Contours Extraction

    NASA Astrophysics Data System (ADS)

    Li, Baolin; Zhou, Shoujun

    2018-04-01

    The active contour model (ACM) plays an important role in computer vision and medical image applications. Traditional ACMs were used to extract a single class of object contours, while the simultaneous extraction of multiple classes of contours of interest (i.e., various closed- and open-ended contours) has not been solved so far. Therefore, a novel ACM named “Ingenious Snake” is proposed to adaptively extract these contours. First, ridge points are extracted based on a local phase measurement of the gradient vector flow field, so that the subsequent ridgeline initialization is automated and fast. Second, contour deformation and evolution are implemented with the Ingenious Snake. In the experiments, the results of initialization, deformation and evolution are compared with existing methods; the quantitative evaluation of the structure extraction is satisfactory with respect to effectiveness and accuracy.
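    The Ingenious Snake itself is not reproduced here; for orientation, a classic single-contour snake in scikit-image looks as follows. The paper's contribution, GVF-based ridge initialization and multi-class open/closed contour handling, is exactly what this sketch omits.

        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        def evolve_snake(image, center, radius, n_points=200):
            # Circular initialization in (row, col) coordinates
            # (the convention of recent scikit-image versions).
            s = np.linspace(0, 2 * np.pi, n_points)
            init = np.column_stack([center[0] + radius * np.sin(s),
                                    center[1] + radius * np.cos(s)])
            return active_contour(gaussian(image, sigma=3), init,
                                  alpha=0.015, beta=10.0, gamma=0.001)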

  9. Automated Monitoring and Analysis of Social Behavior in Drosophila

    PubMed Central

    Dankert, Heiko; Wang, Liming; Hoopfer, Eric D.; Anderson, David J.; Perona, Pietro

    2009-01-01

    We introduce a method based on machine vision for automatically measuring aggression and courtship in Drosophila melanogaster. The genetic and neural circuit bases of these innate social behaviors are poorly understood. High-throughput behavioral screening in this genetically tractable model organism is a potentially powerful approach, but it is currently very laborious. Our system monitors interacting pairs of flies, and computes their location, orientation and wing posture. These features are used for detecting behaviors exhibited during aggression and courtship. Among these, wing threat, lunging and tussling are specific to aggression; circling, wing extension (courtship “song”) and copulation are specific to courtship; locomotion and chasing are common to both. Ethograms may be constructed automatically from these measurements, saving considerable time and effort. This technology should enable large-scale screens for genes and neural circuits controlling courtship and aggression. PMID:19270697
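    One ingredient of such per-frame measurements, a fly's position and body orientation, can be sketched as an ellipse fit to the thresholded silhouette (illustrative only, not the authors' implementation):

        import cv2

        def fly_pose(frame_gray):
            # Dark fly on a bright arena: inverted Otsu threshold.
            _, mask = cv2.threshold(frame_gray, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            c = max(contours, key=cv2.contourArea)
            if len(c) < 5:  # cv2.fitEllipse needs at least 5 points
                return None
            (cx, cy), (w, h), angle = cv2.fitEllipse(c)
            return (cx, cy), angle  # centroid and body-axis angle in degrees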

  10. Job Prospects for Manufacturing Engineers.

    ERIC Educational Resources Information Center

    Basta, Nicholas

    1985-01-01

    Coming from a variety of disciplines, manufacturing engineers are keys to industry's efforts to modernize, with demand exceeding supply. The newest and fastest-growing areas include machine vision, composite materials, and manufacturing automation protocols, each of which is briefly discussed. (JN)

  11. VISION-BASED MONITORING AND CONTROL OF CONSTRUCTION OPERATIONS CARBON FOOTPRINT

    EPA Science Inventory

    Automated and continuous carbon footprint monitoring of construction operations supports contractors and project managers with the information required to assess the carbon footprint of various construction operation alternatives. This can ultimately lead to reduction of...

  12. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
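    The correlation-based recognition described above can be illustrated with normalized cross-correlation template matching, here via present-day OpenCV rather than the 1981 implementation:

        import cv2

        def locate(template, scene):
            # Slide the template over the scene and score each position by
            # normalized cross-correlation; the maximum is the best match.
            result = cv2.matchTemplate(scene, template, cv2.TM_CCORR_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(result)
            return max_loc, max_val  # top-left corner of best match, its score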

  13. Preoperative graft thickness measurements do not influence final BSCVA or speed of vision recovery after descemet stripping automated endothelial keratoplasty.

    PubMed

    Phillips, Paul M; Phillips, Louis J; Maloney, Charlene M

    2013-11-01

    To evaluate the influence of preoperative graft thickness (GT) on final visual acuity and speed of vision recovery after Descemet stripping automated endothelial keratoplasty (DSAEK). The best spectacle-corrected visual acuity (BSCVA) was measured at 1, 3, 6, 12, and 24 months after DSAEK. A regression analysis was performed to determine whether GT predicted the BSCVA at each time gate. The time to achieve the "1-year maximum BSCVA" was determined to assess the "speed" of recovery for all eyes that had data at 1, 3, 6, and 12 months. Additionally, the final BSCVA was compared between 2 distinct groups of "thin" (<125-μm) versus "thick" (>165-μm) tissue. There were 144 eyes evaluated. No significant correlations were found between the GT and the BSCVA at any of the time gates: 1, 3, 6, 12, or 24 months. Speed of vision recovery was not affected by the GT. The average GT values of the eyes that achieved maximum BSCVA by 1, 3, 6 months and 1 year were not significantly different and were 154.7, 141.3, 149, and 150.1 μm, respectively. No difference was found between the BSCVA of "thick" versus "thin" tissues at any of the time gates: 1, 3, 6, or 12 months. Preoperative GT measurements were not correlated with the BSCVA at 1, 6, 12, or 24 months postoperatively and do not determine the speed of vision recovery. Additionally, no difference was found in postoperative vision outcomes when directly comparing tissues at either end of the GT spectrum of this study.

  14. Comparative randomised active drug controlled clinical trial of a herbal eye drop in computer vision syndrome.

    PubMed

    Chatterjee, Pranab Kr; Bairagi, Debasis; Roy, Sudipta; Majumder, Nilay Kr; Paul, Ratish Ch; Bagchi, Sunil Ch

    2005-07-01

    A comparative double-blind placebo-controlled clinical trial of a herbal eye drop (itone) was conducted to find out its efficacy and safety in 120 patients with computer vision syndrome. Patients using computers for more than 3 hours continuously per day having symptoms of watering, redness, asthenia, irritation, foreign body sensation and signs of conjunctival hyperaemia, corneal filaments and mucus were studied. One hundred and twenty patients were randomly given either placebo, tears substitute (tears plus) or itone in identical vials with specific code number and were instructed to put one drop four times daily for 6 weeks. Subjective and objective assessments were done at bi-weekly intervals. In computer vision syndrome both subjective and objective improvements were noticed with itone drops. Itone drop was found significantly better than placebo (p<0.01) and almost identical results were observed with tears plus (difference was not statistically significant). Itone is considered to be a useful drug in computer vision syndrome.

  15. 3D marker-controlled watershed for kidney segmentation in clinical CT exams.

    PubMed

    Wieclawek, Wojciech

    2018-02-27

    Image segmentation is an essential and non-trivial task in computer vision and medical image analysis. Computed tomography (CT) is one of the most accessible medical examination techniques for visualizing the interior of a patient's body. Among computer-aided diagnostic systems, the applications dedicated to kidney segmentation represent a relatively small group, and literature solutions are typically verified on relatively small databases. The goal of this research is to develop a novel algorithm for fully automated kidney segmentation, designed for large-database analysis including both physiological and pathological cases. This study presents a 3D marker-controlled watershed transform developed and employed for fully automated CT kidney segmentation. The original and most complex step in the current proposition is the automatic generation of 3D marker images. The final kidney segmentation step is an analysis of the labelled image obtained from the marker-controlled watershed transform; it consists of morphological operations and shape analysis. The implementation was conducted in MATLAB, Version 2017a, using, among others, the Image Processing Toolbox. 170 clinical abdominal CT studies were subjected to the analysis. The dataset includes normal as well as various pathological cases (agenesis, renal cysts, tumors, renal cell carcinoma, kidney cirrhosis, partial or radical nephrectomy, hematoma and nephrolithiasis). Manual and semi-automated delineations were used as a gold standard. Among 67 delineated medical cases, 62 cases are 'Very good', whereas only 5 are 'Good' according to Cohen's Kappa interpretation. The segmentation results show that the mean values of Sensitivity, Specificity, Dice, Jaccard, Cohen's Kappa and Accuracy are 90.29, 99.96, 91.68, 85.04, 91.62 and 99.89%, respectively. All 170 medical cases (with and without outlines) were classified by three independent medical experts as 'Very good' in 143-148 cases, as 'Good' in 15-21 cases and as 'Moderate' in 6-8 cases. An automatic kidney segmentation approach for CT studies that competes with commonly known solutions was developed. The algorithm gives promising results, which were confirmed during a validation procedure on a relatively large database of 170 CTs including both physiological and pathological cases.
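    The paper's implementation is in MATLAB; a rough Python analogue of a 3D marker-controlled watershed, with deliberately simplistic marker generation and hypothetical thresholds, is sketched below.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def segment(volume, low, high):
            # Gradient magnitude drives the flooding (sobel is
            # n-dimensional in recent scikit-image versions).
            gradient = sobel(volume)
            markers = np.zeros_like(volume, dtype=np.int32)
            markers[volume < low] = 1    # background marker
            markers[volume > high] = 2   # foreground (organ) marker
            labels = watershed(gradient, markers)
            return ndi.binary_fill_holes(labels == 2)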

  16. Automated tissue classification of intracardiac optical coherence tomography images (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.

    2016-03-01

    Remodeling of the myocardium is associated with increased risk of arrhythmia and heart failure. Our objective is to automatically identify regions of fibrotic myocardium, dense collagen, and adipose tissue, which can serve as a way to guide radiofrequency ablation therapy or endomyocardial biopsies. Using computer vision and machine learning, we present an automated algorithm to classify tissue compositions from cardiac optical coherence tomography (OCT) images. Three-dimensional OCT volumes were obtained from 15 human hearts ex vivo within 48 hours of donor death (source, NDRI). We first segmented B-scans using a graph searching method, estimating the boundary of each region by minimizing a cost function that consisted of intensity, gradient, and contour smoothness terms. Then, features including texture measures, optical properties, and statistics of higher moments were extracted. We trained a statistical model, a relevance vector machine, with the abovementioned features to classify tissue compositions. To validate our method, we applied our algorithm to 77 volumes. The validation datasets were manually segmented and classified by two investigators who were blind to our algorithm results and who identified the tissues based on trichrome histology and pathology. The difference between automated and manual segmentation was 51.78 +/- 50.96 μm. Experiments showed that the attenuation coefficients of dense collagen were significantly different from those of other tissue types (P < 0.05, ANOVA). Importantly, myocardial fibrosis differed from normal myocardium in entropy and kurtosis. The tissue types were classified with an accuracy of 84%. The results show good agreement with histology.
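    A sketch of the feature-and-classify stage under stated assumptions: entropy and kurtosis, two of the reported discriminative features, feed a classifier. scikit-learn has no relevance vector machine, so a support vector classifier stands in purely for illustration.

        import numpy as np
        from scipy.stats import kurtosis
        from skimage.measure import shannon_entropy
        from sklearn.svm import SVC

        def patch_features(patch):
            # Two of the features reported as discriminative in the paper.
            return [shannon_entropy(patch), kurtosis(patch, axis=None)]

        def train_classifier(patches, labels):
            X = np.array([patch_features(p) for p in patches])
            return SVC(kernel="rbf").fit(X, labels)  # RVM stand-in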

  17. Automated Selection Of Pictures In Sequences

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.; Shelton, Robert O.

    1995-01-01

    Method of automated selection of film or video motion-picture frames for storage or examination developed. Beneficial in situations in which quantity of visual information available exceeds amount stored or examined by humans in reasonable amount of time, and/or necessary to reduce large number of motion-picture frames to few conveying significantly different information in manner intermediate between movie and comic book or storyboard. For example, computerized vision system monitoring industrial process programmed to sound alarm when changes in scene exceed normal limits.
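    The selection rule reduces to keeping a frame only when its difference from the last kept frame exceeds a limit; a minimal sketch (threshold illustrative):

        import cv2
        import numpy as np

        def select_frames(frames, threshold=12.0):
            # frames: iterable of BGR images; returns indices of kept frames.
            kept, last = [], None
            for i, f in enumerate(frames):
                g = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
                if last is None or np.mean(cv2.absdiff(g, last)) > threshold:
                    kept.append(i)
                    last = g
            return kept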

  18. Neural network expert system for X-ray analysis of welded joints

    NASA Astrophysics Data System (ADS)

    Kozlov, V. V.; Lapik, N. V.; Popova, N. V.

    2018-03-01

    The use of intelligent technologies for the automated analysis of product quality is one of the main trends in modern machine building. At the same time, methods based on artificial neural networks, the foundation for building automated intelligent diagnostic systems, are developing rapidly in various spheres of human activity. Machine vision technologies make it possible to effectively detect the presence of certain regularities in the analyzed image, including defects of welded joints, from radiography data.

  19. Integrating Mobile Robotics and Vision with Undergraduate Computer Science

    ERIC Educational Resources Information Center

    Cielniak, G.; Bellotto, N.; Duckett, T.

    2013-01-01

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of…

  20. Parallel computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhr, L.

    1987-01-01

    This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.

  1. Rationale, Design and Implementation of a Computer Vision-Based Interactive E-Learning System

    ERIC Educational Resources Information Center

    Xu, Richard Y. D.; Jin, Jesse S.

    2007-01-01

    This article presents a schematic application of computer vision technologies to e-learning that is synchronous, peer-to-peer-based, and supports an instructor's interaction with non-computer teaching equipment. The article first discusses the importance of these focused e-learning areas, where the properties include accurate bidirectional…

  2. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  3. Tracking by Identification Using Computer Vision and Radio

    PubMed Central

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

    We present a novel system for detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds, excellent computer-vision-based localization and strong identity information provided by the radio system, and is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for the evaluation of systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion of both systems significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485

  4. 45 CFR 309.145 - What costs are allowable for Tribal IV-D programs carried out under § 309.65(a) of this part?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...

  5. 45 CFR 309.145 - What costs are allowable for Tribal IV-D programs carried out under § 309.65(a) of this part?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...

  6. 45 CFR 309.145 - What costs are allowable for Tribal IV-D programs carried out under § 309.65(a) of this part?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...

  7. 45 CFR 309.145 - What costs are allowable for Tribal IV-D programs carried out under § 309.65(a) of this part?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...

  8. 45 CFR 309.145 - What costs are allowable for Tribal IV-D programs carried out under § 309.65(a) of this part?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...

  9. 45 CFR 286.205 - How will we determine if a Tribe fails to meet the minimum work participation rate(s)?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., financial records, and automated data systems; (ii) The data are free from computational errors and are... records, financial records, and automated data systems; (ii) The data are free from computational errors... records, and automated data systems; (ii) The data are free from computational errors and are internally...

  10. Automated image quality assessment for chest CT scans.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2018-02-01

    Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
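    The core measurements reduce to first- and second-order statistics within the segmented homogeneous regions; a minimal sketch, assuming the region masks are already available:

        import numpy as np

        def region_stats(volume_hu, mask):
            # Mean HU checks intensity calibration; std estimates noise.
            values = volume_hu[mask]
            return float(values.mean()), float(values.std())

        # Rough expectations: air near -1000 HU, blood roughly 30-60 HU;
        # a large mean offset suggests miscalibration, a large std high noise.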

  11. Toothguide Trainer tests with color vision deficiency simulation monitor.

    PubMed

    Borbély, Judit; Varsányi, Balázs; Fejérdy, Pál; Hermann, Péter; Jakstat, Holger A

    2010-01-01

    The aim of this study was to evaluate whether simulated severe red and green color vision deficiency (CVD) influenced color matching results and to investigate whether training with the Toothguide Trainer (TT) computer program enabled better color matching results. A total of 31 color-normal dental students participated in the study. Every participant had to pass the Ishihara Test; participants with a red/green color vision deficiency were excluded. A lecture on tooth color matching was given, and individual training with TT was performed. To measure individual tooth color matching results in normal and color-deficient display modes, the TT final exam was displayed on a calibrated monitor that served as a hardware-based method of simulating protanopy and deuteranopy. Data from the TT final exams were collected in normal and in severe red and green CVD-simulating monitor display modes. Color difference values for each participant in each display mode were computed (∑ΔE*ab), and the respective means and standard deviations were calculated. Student's t-test was used in the statistical evaluation. Participants made larger ΔE*ab errors in the severe color vision deficient display modes than in the normal monitor mode. TT tests showed a significant (p<0.05) difference in the tooth color matching results of the severe green color vision deficiency simulation mode compared to the normal vision mode. Students' shade matching results were significantly better after training (p=0.009). Computer-simulated severe color vision deficiency modes resulted in significantly worse color matching quality compared to the normal color vision mode. The Toothguide Trainer computer program improved color matching results. Copyright © 2010 Elsevier Ltd. All rights reserved.
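    The summed color-difference score uses the CIE 1976 formula; a single ΔE*ab term between two Lab colors is simply their Euclidean distance:

        import math

        def delta_e_ab(lab1, lab2):
            # CIE 1976 color difference between two (L*, a*, b*) triples.
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

        # Example: delta_e_ab((62.0, 2.1, 18.3), (60.5, 1.8, 16.9)) ≈ 2.07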

  12. Image formation simulation for computer-aided inspection planning of machine vision systems

    NASA Astrophysics Data System (ADS)

    Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz

    2017-06-01

    In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented, along with a versatile two-robot setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real-time graphics and high-quality off-line rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real-time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is on the dependency of simulation quality on measuring, modeling and parameterizing the optical surface properties of the object to be inspected. The applicability to real-world problems is demonstrated by taking the example of planning a 3D laser scanner application. Qualitative and quantitative comparisons of synthetic and real images are presented.

  13. Computer image analysis in caryopses quality evaluation as exemplified by malting barley

    NASA Astrophysics Data System (ADS)

    Koszela, K.; Raba, B.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Przybylak, A.; Boniecki, P.; Przybył, J.

    2015-07-01

    One purpose of employing modern technologies in the agricultural and food industry is to increase the efficiency and automation of production processes, which improves the productive effectiveness of business enterprises and makes them more competitive. Today this branch of the economy faces the challenge of producing agricultural and food products with the best quality parameters while maintaining optimal production and distribution costs of the processed biological material. Thus, several scientific centers seek to devise new and improved methods and technologies in this field that will meet these expectations. A new solution, under constant development, is to employ so-called machine vision to replace human work in both qualitative and quantitative evaluation processes. An indisputable advantage of this method is that it keeps the evaluation unbiased while improving its speed and, importantly, eliminating the fatigue factor of the expert. This paper elaborates on quality evaluation by marking the contamination in malting barley grains using computer image analysis and selected methods of artificial intelligence [4-5].

  14. [Meibomian gland dysfunction in computer vision syndrome].

    PubMed

    Pimenidi, M K; Polunin, G S; Safonova, T N

    2010-01-01

    This article reviews the etiology and pathogenesis of dry eye syndrome due to meibomian gland dysfunction (MGD). It is shown that blink rate influences meibomian gland functioning and the development of computer vision syndrome. Current diagnosis and treatment options for MGD are presented.

  15. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
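    A standard form of the nonquadratic functional in question (a one-dimensional weak-membrane energy, written here for orientation rather than as the paper's exact formulation) augments the quadratic data and smoothness terms with binary line variables that switch smoothness off at discontinuities:

        E(f, \ell) = \sum_i c_i\,(f_i - d_i)^2
                     + \lambda \sum_i (f_{i+1} - f_i)^2\,(1 - \ell_i)
                     + \alpha \sum_i \ell_i

    Here the d_i are the sparse data, c_i marks where data exist, and each ℓ_i ∈ {0, 1}; the line-variable terms make the functional nonconvex, which is precisely what the analog network must contend with.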

  16. The diagnostic performance of expert dermoscopists vs a computer-vision system on small-diameter melanomas.

    PubMed

    Friedman, Robert J; Gutkowicz-Krusin, Dina; Farber, Michele J; Warycha, Melanie; Schneider-Kels, Lori; Papastathis, Nicole; Mihm, Martin C; Googe, Paul; King, Roy; Prieto, Victor G; Kopf, Alfred W; Polsky, David; Rabinovitz, Harold; Oliviero, Margaret; Cognetta, Armand; Rigel, Darrell S; Marghoob, Ashfaq; Rivers, Jason; Johr, Robert; Grant-Kels, Jane M; Tsao, Hensin

    2008-04-01

    To evaluate the performance of dermoscopists in diagnosing small pigmented skin lesions (diameter

  17. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    PubMed

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used in wind tunnel testing, or measurements in confined spaces with limited optical access.

  18. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    PubMed Central

    Spinosa, Emanuele; Roberts, David A.

    2017-01-01

    Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used in wind tunnel testing, or measurements in confined spaces with limited optical access. PMID:28757553

  19. Efficient large-scale graph data optimization for intelligent video surveillance

    NASA Astrophysics Data System (ADS)

    Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming

    2017-08-01

    Society is rapidly adopting cameras in a wide variety of locations and applications: site traffic monitoring, parking lot surveillance, vehicles, and smart spaces. These cameras provide data every day that must be analyzed effectively. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform traditional vision systems into ubiquitous smart camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area and traffic surveillance. Dense camera networks, in which most cameras have large overlapping fields of view, are well studied; here we focus instead on sparse camera networks. A sparse camera network covers a large surveillance area with as few cameras as possible, so most cameras do not overlap each other's field of view. This setting is challenging because of the unknown topology of the network, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this paper, we present a comprehensive survey of recent results on topology learning, object appearance modeling, and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.

  20. Taming Crowded Visual Scenes

    DTIC Science & Technology

    2014-08-12

    Nolan Warner, Mubarak Shah. Tracking in Dense Crowds Using Prominence and Neighborhood Motion Concurrence, IEEE Transactions on Pattern Analysis... The work bridges computer vision, computer graphics and evacuation dynamics by providing a common platform, spanning research areas that include Computer Vision, Computer Graphics, and Pedestrian Evacuation Dynamics. Despite the...

  1. Computer vision syndrome: a review of ocular causes and potential treatments.

    PubMed

    Rosenfield, Mark

    2011-09-01

    Computer vision syndrome (CVS) is the combination of eye and vision problems associated with the use of computers. In modern western society the use of computers for both vocational and avocational activities is almost universal. However, CVS may have a significant impact not only on visual comfort but also occupational productivity since between 64% and 90% of computer users experience visual symptoms which may include eyestrain, headaches, ocular discomfort, dry eye, diplopia and blurred vision either at near or when looking into the distance after prolonged computer use. This paper reviews the principal ocular causes for this condition, namely oculomotor anomalies and dry eye. Accommodation and vergence responses to electronic screens appear to be similar to those found when viewing printed materials, whereas the prevalence of dry eye symptoms is greater during computer operation. The latter is probably due to a decrease in blink rate and blink amplitude, as well as increased corneal exposure resulting from the monitor frequently being positioned in primary gaze. However, the efficacy of proposed treatments to reduce symptoms of CVS is unproven. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will enable practitioners to optimize visual comfort and efficiency during computer operation. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.

  2. An Enduring Dialogue between Computational and Empirical Vision.

    PubMed

    Martinez-Conde, Susana; Macknik, Stephen L; Heeger, David J

    2018-04-01

    In the late 1970s, key discoveries in neurophysiology, psychophysics, computer vision, and image processing had reached a tipping point that would shape visual science for decades to come. David Marr and Ellen Hildreth's 'Theory of edge detection', published in 1980, set out to integrate the newly available wealth of data from behavioral, physiological, and computational approaches in a unifying theory. Although their work had wide and enduring ramifications, their most important contribution may have been to consolidate the foundations of the ongoing dialogue between theoretical and empirical vision science. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Automated surface quality inspection with ARGOS: a case study

    NASA Astrophysics Data System (ADS)

    Kiefhaber, Daniel; Etzold, Fabian; Warken, Arno F.; Asfour, Jean-Michel

    2017-06-01

    The commercial availability of automated inspection systems for optical surfaces specified according to ISO 10110-7 promises unsupervised, automated quality control with reproducible results. In this study, the classification results of the ARGOS inspection system are compared to the decisions of well-trained inspectors based on manual-visual inspection. The two are found to agree in 93.6% of the studied cases. Representative cases with differing results are examined and shown to be partly caused by shortcomings of the ISO 10110-7 standard, which was written for the industry-standard manual-visual inspection; applying it to high-resolution whole-surface images from objective machine vision systems brings with it a few challenges, which are discussed.

  4. Continuous monitoring of the lunar or Martian subsurface using on-board pattern recognition and neural processing of Rover geophysical data

    NASA Technical Reports Server (NTRS)

    Glass, Charles E.; Boyd, Richard V.; Sternberg, Ben K.

    1991-01-01

    The overall aim is to provide base technology for an automated vision system for on-board interpretation of geophysical data. During the first year's work, it was demonstrated that geophysical data can be treated as patterns and interpreted using single neural networks. Current research is developing an integrated vision system comprising neural networks, algorithmic preprocessing, and expert knowledge. This system is to be tested incrementally using synthetic geophysical patterns, laboratory generated geophysical patterns, and field geophysical patterns.

  5. Development and Testing of Optimized Autonomous and Connected Vehicle Trajectories at Signalized Intersections [summary

    DOT National Transportation Integrated Search

    2017-12-01

    Visions of self-driving vehicles abound in popular science and entertainment, and many programs are at work to turn this vision into reality. Vehicle automation has progressed rapidly in recent years, from simple driver assistance technologies...

  6. Low computation vision-based navigation for a Martian rover

    NASA Technical Reports Server (NTRS)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  7. Computational models of human vision with applications

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision; the resulting spatial frequency spectra are similar to those of actual cone mosaics. The Hartley transform was evaluated as a tool of image processing, and it is suggested that it could be used in signal processing applications, e.g., image processing.
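    As an aside on the Hartley transform (a standard identity, not taken from the paper): for a real signal the discrete Hartley transform equals the real part minus the imaginary part of the DFT, so it can be computed with an ordinary FFT and stays real-valued:

        import numpy as np

        def dht(x):
            # H[k] = sum_n x[n] * cas(2*pi*k*n/N), with cas = cos + sin,
            # which for real x equals Re(F[k]) - Im(F[k]).
            f = np.fft.fft(x)
            return f.real - f.imag

        print(dht(np.array([1.0, 2.0, 0.0, -1.0])))  # real-valued output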

  8. Automated Help System For A Supercomputer

    NASA Technical Reports Server (NTRS)

    Callas, George P.; Schulbach, Catherine H.; Younkin, Michael

    1994-01-01

    Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.

  9. Computer Programs For Automated Welding System

    NASA Technical Reports Server (NTRS)

    Agapakis, John E.

    1993-01-01

    Computer programs developed for use in controlling automated welding system described in MFS-28578. Together with control computer, computer input and output devices and control sensors and actuators, provide flexible capability for planning and implementation of schemes for automated welding of specific workpieces. Developed according to macro- and task-level programming schemes, which increases productivity and consistency by reducing amount of "teaching" of system by technician. System provides for three-dimensional mathematical modeling of workpieces, work cells, robots, and positioners.

  10. Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.

    PubMed

    Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma

    2017-07-01

    The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of Computer vision syndrome. Information was collected from Medline, Embase & National Library of Medicine over the last 30 years up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with Computer vision syndrome present to a variety of different specialists, including General Practitioners, Neurologists, Stroke physicians and Ophthalmologists. While the condition is common, awareness is poor among both the public and health professionals. Recognising this condition in the clinic or in emergency situations like the TIA clinic is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of Computer vision syndrome and education of health professionals is vital. Preventive strategies should form part of workplace ergonomics routinely. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.

  11. Comparative randomised controlled clinical trial of a herbal eye drop with artificial tear and placebo in computer vision syndrome.

    PubMed

    Biswas, N R; Nainiwal, S K; Das, G K; Langan, U; Dadeya, S C; Mongre, P K; Ravi, A K; Baidya, P

    2003-03-01

    A comparative randomised double masked multicentric clinical trial was conducted to determine the efficacy and safety of a herbal eye drop preparation (itone eye drops) versus artificial tear and placebo in 120 patients with computer vision syndrome. Patients using a computer for at least 2 hours continuously per day, having symptoms of irritation, foreign body sensation, watering, redness, headache, eyeache and signs of conjunctival congestion, mucous/debris, corneal filaments, corneal staining or lacrimal lake were included in this study. Every patient was instructed to put two drops of either the herbal drug, placebo or artificial tear in the eyes regularly four times a day for 6 weeks. Objective and subjective findings were recorded at bi-weekly intervals up to six weeks. Side-effects, if any, were also noted. In computer vision syndrome the herbal eye drop preparation was found significantly better than artificial tear (p < 0.01). No side-effects were noted with any of the drugs. Both subjective and objective improvements were observed in itone-treated cases. Thus, itone can be considered a useful drug in computer vision syndrome.

  12. Computer vision syndrome in presbyopia and beginning presbyopia: effects of spectacle lens type.

    PubMed

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    This office field study investigated the effects of different types of spectacle lenses habitually worn by computer users with presbyopia and in the beginning stages of presbyopia. Computer vision syndrome was assessed through reported complaints and ergonomic conditions. A questionnaire regarding the type of habitually worn near-vision lenses at the workplace, visual conditions and the levels of different types of complaints was administered to 175 participants aged 35 years and older (mean ± SD: 52.0 ± 6.7 years). Statistical factor analysis identified five specific aspects of the complaints. Workplace conditions were analysed based on photographs taken in typical working conditions. In the subgroup of 25 users between the ages of 36 and 57 years (mean 44 ± 5 years), who wore distance-vision lenses and performed more demanding occupational tasks, the reported extents of 'ocular strain', 'musculoskeletal strain' and 'headache' increased with the daily duration of computer work and explained up to 44 per cent of the variance (rs = 0.66). In the other subgroups, this effect was smaller, while in the complete sample (n = 175), this correlation was approximately rs = 0.2. The subgroup of 85 general-purpose progressive lens users (mean age 54 years) adopted head inclinations that were approximately seven degrees more elevated than those of the subgroups with single vision lenses. The present questionnaire was able to assess the complaints of computer users depending on the type of spectacle lenses worn. A missing near-vision addition among participants in the early stages of presbyopia was identified as a risk factor for complaints among those with longer daily durations of demanding computer work. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  13. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain is found to emulate similar graph/network models. This implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotics and computer vision industries.

  14. USSR Report: Cybernetics, Computers and Automation Technology. No. 69.

    DTIC Science & Technology

    1983-05-06

    computers in multiprocessor and multistation design, control and scientific research automation systems. The results of comparing the efficiency of...Podvizhnaya, Scientific Research Institute of Control Computers, Severodonetsk] [Text] The most significant change in the design of the SM-2M compared to...UPRAVLYAYUSHCHIYE SISTEMY I MASHINY, Nov-Dec 82) 95 APPLICATIONS Kiev Automated Control System, Design Features and Prospects for Development (V. A

  15. The Logical Extension

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The same software controlling autonomous and crew-assisted operations for the International Space Station (ISS) is enabling commercial enterprises to integrate and automate manual operations, also known as decision logic, in real time across complex and disparate networked applications, databases, servers, and other devices, all with quantifiable business benefits. Auspice Corporation, of Framingham, Massachusetts, developed the Auspice TLX (The Logical Extension) software platform to effectively mimic the human decision-making process. Auspice TLX automates operations across extended enterprise systems, where any given infrastructure can include thousands of computers, servers, switches, and modems that are connected, and therefore, dependent upon each other. The concept behind the Auspice software spawned from a computer program originally developed in 1981 by Cambridge, Massachusetts-based Draper Laboratory for simulating tasks performed by astronauts aboard the Space Shuttle. At the time, the Space Shuttle Program was dependent upon paper-based procedures for its manned space missions, which typically averaged 2 weeks in duration. As the Shuttle Program progressed, NASA began increasing the length of manned missions in preparation for a more permanent space habitat. Acknowledging the need to relinquish paper-based procedures in favor of an electronic processing format to properly monitor and manage the complexities of these longer missions, NASA realized that Draper's task simulation software could be applied to its vision of year-round space occupancy. In 1992, Draper was awarded a NASA contract to build User Interface Language software to enable autonomous operations of a multitude of functions on Space Station Freedom (the station was redesigned in 1993 and converted into the international venture known today as the ISS)

  16. Computer vision syndrome (CVS) - Thermographic Analysis

    NASA Astrophysics Data System (ADS)

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has seen exponential growth in recent decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their broad acceptance by users. The consequences and impact of uninterrupted work with computer screens or displays on visual health have grabbed researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great efforts, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of these are blurred vision, visual fatigue and Dry Eye Syndrome (DES) due to inadequate lubrication of the ocular surface when blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the existing differences in temperature variations of healthy ocular surfaces.

  17. Teaching and implementing autonomous robotic lab walkthroughs in a biotech laboratory through model-based visual tracking

    NASA Astrophysics Data System (ADS)

    Wojtczyk, Martin; Panin, Giorgio; Röder, Thorsten; Lenz, Claus; Nair, Suraj; Heidemann, Rüdiger; Goudar, Chetan; Knoll, Alois

    2010-01-01

    After utilizing robots for more than 30 years for classic industrial automation applications, service robots form a constantly increasing market, although the big breakthrough is still awaited. Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. After initial development in Germany, a mobile robot platform extended with an industrial manipulator and the necessary sensors for indoor localization and object manipulation, has been shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products, located in the San Francisco bay area. The determined goal of the mobile manipulator is to support the off-shift staff to carry out completely autonomous or guided, remote controlled lab walkthroughs, which we implement utilizing a recent development of our computer vision group: OpenTL - an integrated framework for model-based visual tracking.

  18. Image analysis of ocular fundus for retinopathy characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, including both nonemergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (DRIVE database) and hundreds of images of non-macular centric and nonuniform illumination views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2), using deformable contours. Preliminary results show accurate segmentation of vessels and a high level of true-positive microaneurysms.
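    A minimal sketch of the morphological enhancement step, assuming scikit-image and a placeholder file name; the structuring-element size and threshold are illustrative, not the values used by the authors.

    ```python
    from skimage import io, morphology, exposure

    # "fundus.png" is a placeholder path for a grayscale fundus image.
    img = io.imread("fundus.png", as_gray=True)

    # The black top-hat keeps thin dark structures (vessels) that disappear
    # under a morphological closing with a disk wider than the vessels.
    tophat = morphology.black_tophat(img, morphology.disk(8))
    vessels = exposure.rescale_intensity(tophat) > 0.2  # illustrative threshold
    ```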

  19. Automatic diet monitoring: a review of computer vision and wearable sensor-based methods.

    PubMed

    Hassannejad, Hamid; Matrella, Guido; Ciampolini, Paolo; De Munari, Ilaria; Mordonini, Monica; Cagnoni, Stefano

    2017-09-01

    Food intake and eating habits have a significant impact on people's health. Widespread diseases, such as diabetes and obesity, are directly related to eating habits. Therefore, monitoring diet can be a substantial basis for developing methods and services to promote a healthy lifestyle and improve personal and national health economy. Studies have demonstrated that manual reporting of food intake is inaccurate and often impractical. Thus, several methods have been proposed to automate the process. This article reviews the most relevant and recent research on automatic diet monitoring, discussing strengths and weaknesses. In particular, the article reviews two approaches to this problem, which account for most of the work in the area. The first approach is based on image analysis and aims at extracting information about food content automatically from food images. The second relies on wearable sensors and has the detection of eating behaviours as its main goal.

  20. Image-algebraic design of multispectral target recognition algorithms

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    1994-06-01

    In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.

  1. Automated detection of red lesions from digital colour fundus photographs.

    PubMed

    Jaafar, Hussain F; Nandi, Asoke K; Al-Nuaimy, Waleed

    2011-01-01

    Earliest signs of diabetic retinopathy, the major cause of vision loss, are damage to the blood vessels and the formation of lesions in the retina. Early detection of diabetic retinopathy is essential for the prevention of blindness. In this paper we present a computer-aided system to automatically identify red lesions from retinal fundus photographs. After pre-processing, a morphological technique was used to segment red lesion candidates from the background and other retinal structures. Then a rule-based classifier was used to discriminate actual red lesions from artifacts. A novel method for blood vessel detection is also proposed to refine the detection of red lesions. For a standardised test set of 219 images, the proposed method can detect red lesions with a sensitivity of 89.7% and a specificity of 98.6% (at lesion level). The performance of the proposed method shows considerable promise for detection of red lesions as well as other types of lesions.
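    The rule-based discrimination step could look like the following sketch, which screens candidate blobs by simple shape rules using scikit-image region properties; the thresholds are illustrative, not the paper's tuned values.

    ```python
    import numpy as np
    from skimage import measure

    def screen_candidates(mask, min_area=10, max_area=300, max_ecc=0.95):
        """Keep candidate blobs that satisfy simple shape rules
        (area and eccentricity ranges are illustrative)."""
        labels = measure.label(mask)
        keep = np.zeros_like(mask, dtype=bool)
        for region in measure.regionprops(labels):
            if min_area <= region.area <= max_area and region.eccentricity <= max_ecc:
                keep[labels == region.label] = True
        return keep
    ```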

  2. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    NASA Astrophysics Data System (ADS)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
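    A minimal sketch of such a prediction model, with a random-forest regressor standing in for the paper's (unspecified) model class and synthetic pose/error data in place of the CV system's measurements.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in: 6-DOF pose measurement vectors and the CV system's
    # observed pose error for each measurement.
    rng = np.random.default_rng(1)
    poses = rng.uniform(-1, 1, size=(1000, 6))
    errors = np.linalg.norm(poses[:, :3], axis=1) + rng.normal(0, 0.05, 1000)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    print("cross-validated R^2:", cross_val_score(model, poses, errors, cv=5).mean())
    ```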

  3. An Exploration of the Emission Properties of X-Ray Bright Points Seen with SDO

    NASA Technical Reports Server (NTRS)

    Saar, S. H.; Elsden, T.; Muglach, K.

    2012-01-01

    We present preliminary results of a study of X-ray Bright Point (XBP) EUV emission and its dependence on other properties. The XBPs were located using a new, automated XBP finder for AIA developed as part of the Feature Finding Team for SDO Computer Vision. We analyze XBPs near disk center, comparing AIA EUV fluxes, HMI LOS magnetic fields, and photospheric flow fields (derived from HMI data) to look for relationships between XBP emission, magnetic flux, velocity fields, and XBP local environment. We find some evidence for differences in the mean XBP temperature with environment. Unsigned magnetic flux is correlated with XBP emission, though other parameters play a role. The majority of XBP footpoints are approaching each other, though at a slight angle from head-on on average. We discuss the results in the context of XBP heating.

  4. Microfluidic microscopy-assisted label-free approach for cancer screening: automated microfluidic cytology for cancer screening.

    PubMed

    Jagannadh, Veerendra Kalyan; Gopakumar, G; Subrahmanyam, Gorthi R K Sai; Gorthi, Sai Siva

    2017-05-01

    Each year, about 7-8 million deaths occur due to cancer around the world. More than half of the cancer-related deaths occur in the less-developed parts of the world. Cancer mortality rate can be reduced with early detection and subsequent treatment of the disease. In this paper, we introduce a microfluidic microscopy-based cost-effective and label-free approach for identification of cancerous cells. We outline a diagnostic framework for the same and detail an instrumentation layout. We have employed classical computer vision techniques such as 2D principal component analysis-based cell type representation followed by support vector machine-based classification. Analogous to criminal face recognition systems implemented with help of surveillance cameras, a signature-based approach for cancerous cell identification using microfluidic microscopy surveillance is demonstrated. Such a platform would facilitate affordable mass screening camps in the developing countries and therefore help decrease cancer mortality rate.
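    A minimal sketch of the classification pipeline, with ordinary PCA substituted for the 2D-PCA variant used in the paper and random placeholder data in place of real cell images.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # Placeholder data: flattened cell images and toy labels
    # (0 = normal, 1 = cancerous).
    rng = np.random.default_rng(2)
    cells = rng.random((400, 32 * 32))
    labels = rng.integers(0, 2, 400)

    clf = make_pipeline(PCA(n_components=40), SVC(kernel="rbf"))
    clf.fit(cells[:300], labels[:300])
    print("toy accuracy:", clf.score(cells[300:], labels[300:]))
    ```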

  5. Real-time biscuit tile image segmentation method based on edge detection.

    PubMed

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
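    BTS itself combines signal-change detection and contour tracing on a GPU; the sketch below shows only the generic edge-detection-plus-contour idea with OpenCV, with a placeholder file name and illustrative Canny thresholds.

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("tile_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder frame
    edges = cv2.Canny(img, 50, 150)

    # Trace external contours and keep the largest as the tile outline.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    tile = max(contours, key=cv2.contourArea)
    mask = np.zeros_like(img)
    cv2.drawContours(mask, [tile], -1, 255, thickness=cv2.FILLED)  # tile pixels = 255
    ```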

  6. Human-Automation Collaboration: Support for Lunar and Planetary Exploration

    DTIC Science & Technology

    2007-02-01

    example, providing thermal regulation, but they limit mobility and senses, e.g., sound, vision, smell and touch. In addition, there is a limited supply...planning capabilities for the MER. Scientists and engineers have access to imagery that includes panoramas, camera views, and hyperspectral

  7. Educational Leadership in the New Millennium: A Vision for 2020.

    ERIC Educational Resources Information Center

    Miller, Thomas W.; Miller, Jean M.

    2001-01-01

    In assessing educational leadership dimensions for 2020, one must consider the effects of technology and automation in education, important influences of changes in society, and progress in developing and understanding administrators' personal/professional needs. Conscience, creative imagination, and interpersonal skill development are essential…

  8. A Starter's Guide to Artificial Intelligence.

    ERIC Educational Resources Information Center

    McConnell, Barry A.; McConnell, Nancy J.

    1988-01-01

    Discussion of the history and development of artificial intelligence (AI) highlights a bibliography of introductory books on various aspects of AI, including AI programing; problem solving; automated reasoning; game playing; natural language; expert systems; machine learning; robotics and vision; critics of AI; and representative software. (LRW)

  9. Automated face detection for occurrence and occupancy estimation in chimpanzees.

    PubMed

    Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S

    2017-03-01

    Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances. They have impacted the methods applicable to field surveys, and these methods have gained significant momentum over the last decade. Yet, most researchers inspect footage manually and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimate site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time compared to the purely manual analysis. This is a non-negligible increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, for a false alarm rate of 2.8%, for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step toward transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to a lack of suitable face views can be easily overcome at the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing opposite directions. This will make it possible to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage. Using semi-automated ape face detection technology for processing camera trap footage requires only 2-4% of the time compared to manual analysis and allows site use by chimpanzees to be estimated relatively reliably. © 2017 Wiley Periodicals, Inc.

  10. Milestones on the road to independence for the blind

    NASA Astrophysics Data System (ADS)

    Reed, Kenneth

    1997-02-01

    Ken will talk about his experiences as an end user of technology. Even moderate technological progress in the field of pattern recognition and artificial intelligence can be, often surprisingly, of great help to the blind. An example is the providing of portable bar code scanners so that a blind person knows what he is buying and what color it is. In this age of microprocessors controlling everything, how can a blind person find out what his VCR is doing? Is there some technique that will allow a blind musician to convert print music into midi files to drive a synthesizer? Can computer vision help the blind cross a road including predictions of where oncoming traffic will be located? Can computer vision technology provide spoken description of scenes so a blind person can figure out where doors and entrances are located, and what the signage on the building says? He asks 'can computer vision help me flip a pancake?' His challenge to those in the computer vision field is 'where can we go from here?'

  11. A large-scale solar dynamics observatory image dataset for computer vision applications.

    PubMed

    Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and biggest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate wider adoption and interest from both the computer vision and the solar physics communities.

  12. Automated validation of a computer operating system

    NASA Technical Reports Server (NTRS)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  13. A Performance Comparison of Color Vision Tests for Military Screening.

    PubMed

    Walsh, David V; Robinson, James; Jurek, Gina M; Capó-Aponte, José E; Riggs, Daniel W; Temme, Leonard A

    2016-04-01

    Current color vision (CV) tests used for aviation screening in the U.S. Army only provide pass-fail results, and previous studies have shown variable sensitivity and specificity. The purpose of this study was to evaluate seven CV tests to determine an optimal CV test screener that potentially could be implemented by the U.S. Army. There were 133 subjects [65 Color Vision Deficits (CVD), 68 Color Vision Normal (CVN)] who performed all of the tests in one setting. CVD and CVN determination was initially assessed with the Oculus anomaloscope. Each test was administered monocularly and according to the test protocol. The main outcome measures were test sensitivity, specificity, and administration time (automated tests). Three of the four Pseudoisochromatic Plate (PIP) tests had a sensitivity/specificity > 0.90 OD/OS, whereas the FALANT tests had a sensitivity/specificity > 0.80 OD/OS. The Cone Contrast Test (CCT) demonstrated sensitivity/specificity > 0.90 OD/OS, whereas the Color Assessment and Diagnosis (CAD) test demonstrated sensitivity/specificity > 0.85 OD/OS. Comparison with the anomaloscope ("gold standard") revealed no significant difference of sensitivity and specificity OD/OS with the CCT, Dvorine PIP, and PIPC tests. Finally, the CCT administration time was significantly faster than the CAD test. The current U.S. Army CV screening tests demonstrated good sensitivity and specificity, as did the automated tests. In addition, some current PIP tests (Dvorine, PIPC), and the CCT performed no worse statistically than the anomaloscope with regard to sensitivity/specificity. The CCT letter presentation is randomized and results would not be confounded by potential memorization, or fading, of book plates.

  14. [Computer eyeglasses--aspects of a confusing topic].

    PubMed

    Huber-Spitzy, V; Janeba, E

    1997-01-01

    With the coming into force of the new Austrian Employee Protection Act, the issue of so-called "computer glasses" will also gain added importance in our country. Such glasses have been defined as vision aids to be exclusively used for work on computer monitors and include single-vision glasses solely intended for reading the computer screen, glasses with bifocal lenses for reading the computer screen and hard-copy documents, as well as those with varifocal lenses featuring a thickened central section. There is still considerable controversy among those concerned as to who will bear the costs for such glasses--most likely it will be the employer. Prescription of such vision aids will be exclusively restricted to ophthalmologists, based on a thorough ophthalmological examination under adequate consideration of the specific working environment and the workplace requirements of the individual employee concerned.

  15. Mining textural knowledge in biological images: Applications, methods and trends.

    PubMed

    Di Cataldo, Santa; Ficarra, Elisa

    2017-01-01

    Texture analysis is a major task in many areas of computer vision and pattern recognition, including biological imaging. Indeed, visual textures can be exploited to distinguish specific tissues or cells in a biological sample, to highlight chemical reactions between molecules, as well as to detect subcellular patterns that can be evidence of certain pathologies. This makes automated texture analysis fundamental in many applications of biomedicine, such as the accurate detection and grading of multiple types of cancer, the differential diagnosis of autoimmune diseases, or the study of physiological processes. Due to their specific characteristics and challenges, the design of texture analysis systems for biological images has attracted ever-growing attention in the last few years. In this paper, we perform a critical review of this important topic. First, we provide a general definition of texture analysis and discuss its role in the context of bioimaging, with examples of applications from the recent literature. Then, we review the main approaches to automated texture analysis, with special attention to the methods of feature extraction and encoding that can be successfully applied to microscopy images of cells or tissues. Our aim is to provide an overview of the state of the art, as well as a glimpse into the latest and future trends of research in this area.

  16. Texture-Based Automated Lithological Classification Using Aeromagenetic Anomaly Images

    USGS Publications Warehouse

    Shankar, Vivek

    2009-01-01

    This report consists of a thesis submitted to the faculty of the Department of Electrical and Computer Engineering, in partial fulfillment of the requirements for the degree of Master of Science, Graduate College, The University of Arizona, 2004. Aeromagnetic anomaly images are geophysical prospecting tools frequently used in the exploration of metalliferous minerals and hydrocarbons. The amplitude and texture content of these images provide a wealth of information to geophysicists who attempt to delineate the nature of the Earth's upper crust. These images prove to be extremely useful in remote areas and locations where the minerals of interest are concealed by basin fill. Typically, geophysicists compile a suite of aeromagnetic anomaly images, derived from amplitude and texture measurement operations, in order to obtain a qualitative interpretation of the lithological (rock) structure. Texture measures have proven to be especially capable of capturing the magnetic anomaly signature of unique lithological units. We performed a quantitative study to explore the possibility of using texture measures as input to a machine vision system in order to achieve automated classification of lithological units. This work demonstrated a significant improvement in classification accuracy over random guessing based on a priori probabilities. Additionally, a quantitative comparison between the performances of five classes of texture measures in their ability to discriminate lithological units was achieved.
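    Texture measures of the kind used here can be computed, for example, from grey-level co-occurrence matrices; the scikit-image sketch below is a generic illustration, not the thesis's actual feature set.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def texture_features(patch):
        """Grey-level co-occurrence features of the general kind used for
        texture-based discrimination (patch must be a uint8 image)."""
        glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])
    ```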

  17. Computer Vision for High-Throughput Quantitative Phenotyping: A Case Study of Grapevine Downy Mildew Sporulation and Leaf Trichomes.

    PubMed

    Divilov, Konstantin; Wiesner-Hanks, Tyr; Barba, Paola; Cadle-Davidson, Lance; Reisch, Bruce I

    2017-12-01

    Quantitative phenotyping of downy mildew sporulation is frequently used in plant breeding and genetic studies, as well as in studies focused on pathogen biology such as chemical efficacy trials. In these scenarios, phenotyping a large number of genotypes or treatments can be advantageous but is often limited by time and cost. We present a novel computational pipeline dedicated to estimating the percent area of downy mildew sporulation from images of inoculated grapevine leaf discs in a manner that is time and cost efficient. The pipeline was tested on images from leaf disc assay experiments involving two F1 grapevine families, one that had glabrous leaves (Vitis rupestris B38 × 'Horizon' [RH]) and another that had leaf trichomes (Horizon × V. cinerea B9 [HC]). Correlations between computer vision and manual visual ratings reached 0.89 in the RH family and 0.43 in the HC family. Additionally, we were able to use the computer vision system prior to sporulation to measure the percent leaf trichome area. We estimate that an experienced rater scoring sporulation would spend at least 90% less time using the computer vision system compared with the manual visual method. This will allow more treatments to be phenotyped in order to better understand the genetic architecture of downy mildew resistance and of leaf trichome density. We anticipate that this computer vision system will find applications in other pathosystems or traits where responses can be imaged with sufficient contrast from the background.

  18. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in video sequences received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.

  19. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  20. Computer Vision for the Solar Dynamics Observatory (SDO)

    NASA Astrophysics Data System (ADS)

    Martens, P. C. H.; Attrill, G. D. R.; Davey, A. R.; Engell, A.; Farid, S.; Grigis, P. C.; Kasper, J.; Korreck, K.; Saar, S. H.; Savcheva, A.; Su, Y.; Testa, P.; Wills-Davey, M.; Bernasconi, P. N.; Raouafi, N.-E.; Delouille, V. A.; Hochedez, J. F.; Cirtain, J. W.; Deforest, C. E.; Angryk, R. A.; de Moortel, I.; Wiegelmann, T.; Georgoulis, M. K.; McAteer, R. T. J.; Timmons, R. P.

    2012-01-01

    In Fall 2008 NASA selected a large international consortium to produce a comprehensive automated feature-recognition system for the Solar Dynamics Observatory (SDO). The SDO data that we consider are all of the Atmospheric Imaging Assembly (AIA) images plus surface magnetic-field images from the Helioseismic and Magnetic Imager (HMI). We produce robust, very efficient, professionally coded software modules that can keep up with the SDO data stream and detect, trace, and analyze numerous phenomena, including flares, sigmoids, filaments, coronal dimmings, polarity inversion lines, sunspots, X-ray bright points, active regions, coronal holes, EIT waves, coronal mass ejections (CMEs), coronal oscillations, and jets. We also track the emergence and evolution of magnetic elements down to the smallest detectable features and will provide at least four full-disk, nonlinear, force-free magnetic field extrapolations per day. The detection of CMEs and filaments is accomplished with Solar and Heliospheric Observatory (SOHO)/ Large Angle and Spectrometric Coronagraph (LASCO) and ground-based Hα data, respectively. A completely new software element is a trainable feature-detection module based on a generalized image-classification algorithm. Such a trainable module can be used to find features that have not yet been discovered (as, for example, sigmoids were in the pre- Yohkoh era). Our codes will produce entries in the Heliophysics Events Knowledgebase (HEK) as well as produce complete catalogs for results that are too numerous for inclusion in the HEK, such as the X-ray bright-point metadata. This will permit users to locate data on individual events as well as carry out statistical studies on large numbers of events, using the interface provided by the Virtual Solar Observatory. The operations concept for our computer vision system is that the data will be analyzed in near real time as soon as they arrive at the SDO Joint Science Operations Center and have undergone basic processing. This will allow the system to produce timely space-weather alerts and to guide the selection and production of quicklook images and movies, in addition to its prime mission of enabling solar science. We briefly describe the complex and unique data-processing pipeline, consisting of the hardware and control software required to handle the SDO data stream and accommodate the computer-vision modules, which has been set up at the Lockheed-Martin Space Astrophysics Laboratory (LMSAL), with an identical copy at the Smithsonian Astrophysical Observatory (SAO).

  1. Computer vision-based evaluation of pre- and postrigor changes in size and shape of Atlantic cod (Gadus morhua) and Atlantic salmon (Salmo salar) fillets during rigor mortis and ice storage: effects of perimortem handling stress.

    PubMed

    Misimi, E; Erikson, U; Digre, H; Skavhaug, A; Mathiassen, J R

    2008-03-01

    The present study describes the possibilities for using computer vision-based methods for the detection and monitoring of transient 2D and 3D changes in the geometry of a given product. The rigor contractions of unstressed and stressed fillets of Atlantic salmon (Salmo salar) and Atlantic cod (Gadus morhua) were used as a model system. Gradual changes in fillet shape and size (area, length, width, and roundness) were recorded for 7 and 3 d, respectively. Also, changes in fillet area and height (cross-section profiles) were tracked using a laser beam and a 3D digital camera. Another goal was to compare rigor developments of the 2 species of farmed fish, and whether perimortem stress affected the appearance of the fillets. Some significant changes in fillet size and shape were found (length, width, area, roundness, height) between unstressed and stressed fish during the course of rigor mortis as well as after ice storage (postrigor). However, the observed irreversible stress-related changes were small and would hardly mean anything for postrigor fish processors or consumers. The cod were less stressed (as defined by muscle biochemistry) than the salmon after the 2 species had been subjected to similar stress bouts. Consequently, the difference between the rigor courses of unstressed and stressed fish was more extreme in the case of salmon. However, the maximal whole fish rigor strength was judged to be about the same for both species. Moreover, the reductions in fillet area and length, as well as the increases in width, were basically of similar magnitude for both species. In fact, the increases in fillet roundness and cross-section height were larger for the cod. We conclude that the computer vision method can be used effectively for automated monitoring of changes in 2D and 3D shape and size of fish fillets during rigor mortis and ice storage. In addition, it can be used for grading of fillets according to uniformity in size and shape, as well as for measurement of fillet yield in terms of thickness. The methods are accurate, rapid, nondestructive, and contact-free and can therefore be regarded as suitable for industrial purposes.

  2. Computer Vision Photogrammetry for Underwater Archaeological Site Recording in a Low-Visibility Environment

    NASA Astrophysics Data System (ADS)

    Van Damme, T.

    2015-04-01

    Computer Vision Photogrammetry allows archaeologists to accurately record underwater sites in three dimensions using simple two-dimensional picture or video sequences, automatically processed in dedicated software. In this article, I share my experience in working with one such software package, namely PhotoScan, to record a Dutch shipwreck site. In order to demonstrate the method's reliability and flexibility, the site in question is reconstructed from simple GoPro footage, captured in low-visibility conditions. Based on the results of this case study, Computer Vision Photogrammetry compares very favourably to manual recording methods both in recording efficiency, and in the quality of the final results. In a final section, the significance of Computer Vision Photogrammetry is then assessed from a historical perspective, by placing the current research in the wider context of about half a century of successful use of Analytical and later Digital photogrammetry in the field of underwater archaeology. I conclude that while photogrammetry has been used in our discipline for several decades now, for various reasons the method was only ever used by a relatively small percentage of projects. This is likely to change in the near future since, compared to the 'traditional' photogrammetry approaches employed in the past, today Computer Vision Photogrammetry is easier to use, more reliable and more affordable than ever before, while at the same time producing more accurate and more detailed three-dimensional results.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jing, E-mail: jing.zhang2@duke.edu; Ghate, Sujata V.; Yoon, Sora C.

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict the likelihood of a trainee missing each mass. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors' algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5 (p < 0.0001). For the 7 residents only, the AUC performance of the models was 0.590 (95% CI, 0.537-0.642) and was also significantly higher than 0.5 (p = 0.0009). Therefore, generally the authors' models were able to predict which masses were detected and which were missed better than chance. Conclusions: The authors proposed an algorithm that was able to predict which masses will be detected and which will be missed by each individual trainee. This confirms the existence of error-making patterns in the detection of masses among radiology trainees. Furthermore, the proposed methodology will allow for the optimized selection of difficult cases for the trainees in an automatic and efficient manner.
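    A minimal sketch of an error-making model of this kind, with logistic regression standing in for the authors' (unspecified) classifier and placeholder feature/label arrays matching the 43-feature setup described above.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict

    # Placeholder data: 43 image features per mass and whether the trainee
    # missed that mass (1) or detected it (0).
    rng = np.random.default_rng(3)
    features = rng.normal(size=(100, 43))
    missed = rng.integers(0, 2, 100)

    model = LogisticRegression(max_iter=1000)
    probs = cross_val_predict(model, features, missed, cv=5,
                              method="predict_proba")[:, 1]
    print("AUC:", roc_auc_score(missed, probs))
    ```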

  4. A human visual based binarization technique for histological images

    NASA Astrophysics Data System (ADS)

    Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos

    2017-05-01

    In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-Ray, Phase Contrast Microscopy, and Histological images, presents problems such as high variability in terms of the human anatomy and variation in modalities. Recent advances made in computer-aided diagnosis of histological images help facilitate detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
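    For comparison, several of the baseline thresholding methods named above are available in scikit-image; the sketch below applies Otsu, Niblack and Sauvola to a placeholder image, with illustrative window parameters.

    ```python
    from skimage import io
    from skimage.filters import threshold_otsu, threshold_niblack, threshold_sauvola

    img = io.imread("he_section.png", as_gray=True)  # placeholder H&E image

    # Global Otsu versus the local (window-based) Niblack and Sauvola methods.
    otsu_mask = img > threshold_otsu(img)
    niblack_mask = img > threshold_niblack(img, window_size=25, k=0.2)
    sauvola_mask = img > threshold_sauvola(img, window_size=25)
    # For dark-stained nuclei, invert the comparison (img < threshold) as needed.
    ```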

  5. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data is captured by the camera and sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data-stream bandwidth-reduction algorithms; the output of the camera contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data-stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and to allow off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.

  6. Dynamic Estimation of Rigid Motion from Perspective Views via Recursive Identification of Exterior Differential Systems with Parameters on a Topological Manifold

    DTIC Science & Technology

    1994-02-15

    O. Faugeras. Three dimensional vision, a geometric viewpoint. MIT Press, 1993. [19] O. D. Faugeras and S. Maybank. Motion from point matches...multiplicity of solutions. Int. J. of Computer Vision, 1990. [20] O.D. Faugeras, Q.T. Luong, and S.J. Maybank. Camera self-calibration: theory and...Kalman filter-based algorithms for estimating depth from image sequences. Int. J. of computer vision, 1989. [41] S. Maybank. Theory of

  7. Computational Vision: A Critical Review

    DTIC Science & Technology

    1989-10-01

    Optic News, 15:9-25, 1989. [8] H. B. Barlow and R. W. Levick. The mechanism of directional selectivity in the rabbit's retina. J. Physiol., 173:477... comparison, other formulations, e.g., [64]... [Figure 7: An illustration of the aperture problem. Left: a bar E is...] Ballard and C. M. Brown. Computer Vision. Prentice-Hall, Englewood Cliffs, NJ, 1982. [7] D. H. Ballard, R. C. Nelson, and B. Yamauchi. Animate vision

  8. Roi Detection and Vessel Segmentation in Retinal Image

    NASA Astrophysics Data System (ADS)

    Sabaz, F.; Atila, U.

    2017-11-01

    Diabetes affects the structure of the eye and can ultimately lead to loss of vision. Depending on the stage of the disease, called diabetic retinopathy, patients experience problems such as blurred vision and sudden vision loss. Automated detection of vessels in retinal images is a useful step toward diagnosing eye diseases, classifying disease, and supporting other clinical applications. The shape and structure of the vessels give information about the severity and stage of the disease. Automatic and fast detection of vessels allows a quick diagnosis so that treatment can start promptly. ROI detection and vessel extraction methods for retinal images are presented in this study. It is shown that the Frangi filter can be successfully used for detection and extraction of vessels.
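    A minimal sketch of Frangi-filter vessel extraction with scikit-image; the placeholder file name, scale range and threshold are assumptions, not the study's settings.

    ```python
    from skimage import io
    from skimage.filters import frangi

    rgb = io.imread("retina.png")      # placeholder fundus photograph
    green = rgb[..., 1]                # green channel gives the best vessel contrast

    # Frangi vesselness emphasises tubular structures across a range of scales.
    vesselness = frangi(green, sigmas=range(1, 8), black_ridges=True)
    vessels = vesselness > vesselness.mean() + 2 * vesselness.std()  # crude threshold
    ```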

  9. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  10. Engineering the smart factory

    NASA Astrophysics Data System (ADS)

    Harrison, Robert; Vera, Daniel; Ahmad, Bilal

    2016-10-01

    The fourth industrial revolution promises to create what has been called the smart factory. The vision is that within such modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralised decisions. This paper provides a view of this initiative from an automation systems perspective. In this context it considers how future automation systems might be effectively configured and supported through their lifecycles and how integration, application modelling, visualisation and reuse of such systems might be best achieved. The paper briefly describes limitations in current engineering methods, and new emerging approaches including the cyber physical systems (CPS) engineering tools being developed by the automation systems group (ASG) at Warwick Manufacturing Group, University of Warwick, UK.

  11. Automation, consolidation, and integration in autoimmune diagnostics.

    PubMed

    Tozzoli, Renato; D'Aurizio, Federica; Villalta, Danilo; Bizzaro, Nicola

    2015-08-01

    Over the past two decades, we have witnessed an extraordinary change in autoimmune diagnostics, characterized by the progressive evolution of analytical technologies, the availability of new tests, and the explosive growth of molecular biology and proteomics. Aside from these huge improvements, organizational changes have also occurred which brought about a more modern vision of the autoimmune laboratory. The introduction of automation (for harmonization of testing, reduction of human error, reduction of handling steps, increase of productivity, decrease of turnaround time, improvement of safety), consolidation (combining different analytical technologies or strategies on one instrument or on one group of connected instruments) and integration (linking analytical instruments or group of instruments with pre- and post-analytical devices) opened a new era in immunodiagnostics. In this article, we review the most important changes that have occurred in autoimmune diagnostics and present some models related to the introduction of automation in the autoimmunology laboratory, such as automated indirect immunofluorescence and changes in the two-step strategy for detection of autoantibodies; automated monoplex immunoassays and reduction of turnaround time; and automated multiplex immunoassays for autoantibody profiling.

  12. AstroCV: Astronomy computer vision library

    NASA Astrophysics Data System (ADS)

    González, Roberto E.; Muñoz, Roberto P.; Hernández, Cristian A.

    2018-04-01

    AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis in the automatic detection and classification of galaxies.

  13. A strongly goal-directed close-range vision system for spacecraft docking

    NASA Technical Reports Server (NTRS)

    Boyer, Kim L.; Goddard, Ralph E.

    1991-01-01

    In this presentation, we will propose a strongly goal-oriented stereo vision system to establish proper docking approach motions for automated rendezvous and capture (AR&C). From an input sequence of stereo video image pairs, the system produces a current best estimate of: contact position; contact vector; contact velocity; and contact orientation. The processing demands imposed by this particular problem and its environment dictate a special case solution; such a system should necessarily be, in some sense, minimalist. By this we mean the system should construct a scene description just sufficiently rich to solve the problem at hand and should do no more processing than is absolutely necessary. In addition, the imaging resolution should be just sufficient. Extracting additional information and constructing higher level scene representations wastes energy and computational resources and injects an unnecessary degree of complexity, increasing the likelihood of malfunction. We therefore take a departure from most prior stereopsis work, including our own, and propose a system based on associative memory. The purpose of the memory is to immediately associate a set of motor commands with a set of input visual patterns in the two cameras. That is, rather than explicitly computing point correspondences and object positions in world coordinates and trying to reason forward from this information to a plan of action, we are trying to capture the essence of reflex behavior through the action of associative memory. The explicit construction of point correspondences and 3D scene descriptions, followed by online velocity and point of impact calculations, is prohibitively expensive from a computational point of view for the problem at hand. Learned patterns on the four image planes, left and right at two discrete but closely spaced instants in time, will be bused directly to infer the spacecraft reaction. This will be a continuing online process as the docking collar approaches.
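
    A toy version of the associative-memory idea described above (stored visual patterns mapped directly to motor commands, with no explicit 3D reconstruction) might look like the following nearest-neighbour sketch; the pattern encoding and command format are hypothetical.

      # Toy nearest-neighbour associative memory: visual patterns map
      # directly to motor commands, without explicit 3D reconstruction.
      # Pattern encoding and command format are hypothetical.
      import numpy as np

      class AssociativeMemory:
          def __init__(self):
              self.patterns, self.commands = [], []

          def learn(self, pattern, command):
              self.patterns.append(np.asarray(pattern, float))
              self.commands.append(np.asarray(command, float))

          def recall(self, pattern):
              # Return the command whose stored pattern is closest to the input.
              dists = [np.linalg.norm(p - pattern) for p in self.patterns]
              return self.commands[int(np.argmin(dists))]

      mem = AssociativeMemory()
      # Pair each stereo-pair feature vector with the thruster command an
      # expert controller issued in that situation (hypothetical values).
      mem.learn([0.1, 0.9, 0.2], [0.0, 0.0, -0.05])
      print(mem.recall([0.12, 0.88, 0.21]))   # closest stored command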

  14. Comparing humans and deep learning performance for grading AMD: A study in using universal deep features and transfer learning for automated AMD analysis.

    PubMed

    Burlina, Philippe; Pacheco, Katia D; Joshi, Neil; Freund, David E; Bressler, Neil M

    2017-03-01

    When left untreated, age-related macular degeneration (AMD) is the leading cause of vision loss in people over fifty in the US. Currently it is estimated that about eight million US individuals have the intermediate stage of AMD that is often asymptomatic with regard to visual deficit. These individuals are at high risk for progressing to the advanced stage where the often treatable choroidal neovascular form of AMD can occur. Careful monitoring to detect the onset and prompt treatment of the neovascular form as well as dietary supplementation can reduce the risk of vision loss from AMD, therefore, preferred practice patterns recommend identifying individuals with the intermediate stage in a timely manner. Past automated retinal image analysis (ARIA) methods applied on fundus imagery have relied on engineered and hand-designed visual features. We instead detail the novel application of a machine learning approach using deep learning for the problem of ARIA and AMD analysis. We use transfer learning and universal features derived from deep convolutional neural networks (DCNN). We address clinically relevant 4-class, 3-class, and 2-class AMD severity classification problems. Using 5664 color fundus images from the NIH AREDS dataset and DCNN universal features, we obtain values for accuracy for the (4-, 3-, 2-) class classification problem of (79.4%, 81.5%, 93.4%) for machine vs. (75.8%, 85.0%, 95.2%) for physician grading. This study demonstrates the efficacy of machine grading based on deep universal features/transfer learning when applied to ARIA and is a promising step in providing a pre-screener to identify individuals with intermediate AMD and also as a tool that can facilitate identifying such individuals for clinical studies aimed at developing improved therapies. It also demonstrates comparable performance between computer and physician grading. Copyright © 2017 Elsevier Ltd. All rights reserved.
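
    A minimal sketch of the universal-features-plus-linear-classifier approach follows, assuming a torchvision backbone as a stand-in; the study's exact DCNN, preprocessing, and grading labels are not reproduced here.

      # Sketch of "universal deep features" + transfer learning: extract
      # penultimate-layer features from a pretrained CNN, then train a
      # linear classifier. The backbone choice is an assumption.
      import torch
      import torchvision.models as models
      import torchvision.transforms as T
      from sklearn.linear_model import LogisticRegression

      backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      backbone.fc = torch.nn.Identity()   # expose the feature vector
      backbone.eval()

      preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

      def features(pil_image):
          with torch.no_grad():
              return backbone(preprocess(pil_image).unsqueeze(0)).squeeze(0).numpy()

      # X = [features(img) for img in fundus_images]   # AMD severity grades in y
      # clf = LogisticRegression(max_iter=1000).fit(X, y)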

  15. Topographic Mapping of Residual Vision by Computer

    ERIC Educational Resources Information Center

    MacKeben, Manfred

    2008-01-01

    Many persons with low vision have diseases that damage the retina only in selected areas, which can lead to scotomas (blind spots) in perception. The most frequent of these diseases is age-related macular degeneration (AMD), in which foveal vision is often impaired by a central scotoma that impairs vision of fine detail and causes problems with…

  16. Machine vision system for automated detection of stained pistachio nuts

    NASA Astrophysics Data System (ADS)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved with manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bichromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bichromatic sorter reject stream and 15% for the small shelling stock stream.

  17. Computers Launch Faster, Better Job Matching

    ERIC Educational Resources Information Center

    Stevenson, Gloria

    1976-01-01

    Employment Security Automation Project (ESAP), a five-year program sponsored by the Employment and Training Administration, features an innovative computer-assisted job matching system and instantaneous computer-assisted service for unemployment insurance claimants. ESAP will also consolidate existing automated employment security systems to…

  18. The Effect of the Usage of Computer-Based Assistive Devices on the Functioning and Quality of Life of Individuals Who Are Blind or Have Low Vision

    ERIC Educational Resources Information Center

    Rosner, Yotam; Perlman, Amotz

    2018-01-01

    Introduction: The Israel Ministry of Social Affairs and Social Services subsidizes computer-based assistive devices for individuals with visual impairments (that is, those who are blind or have low vision) to assist these individuals in their interactions with computers and thus to enhance their independence and quality of life. The aim of this…

  19. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming...video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to...DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains
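
    One plausible reading of the shot-location step, sketched with OpenCV frame differencing; the threshold and differencing scheme below are assumptions, not the report's documented algorithm.

      # Locate a new bullet hole by differencing successive grayscale
      # frames; the centroid of the largest changed region is taken as
      # the impact point. Illustrative only.
      import cv2

      def shot_coordinates(prev_gray, curr_gray, thresh=40):
          diff = cv2.absdiff(curr_gray, prev_gray)
          _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          if not contours:
              return None
          M = cv2.moments(max(contours, key=cv2.contourArea))
          if M["m00"] == 0:
              return None
          return (M["m10"] / M["m00"], M["m01"] / M["m00"])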

  20. [Ophthalmologist and "computer vision syndrome"].

    PubMed

    Barar, A; Apatachioaie, Ioana Daniela; Apatachioaie, C; Marceanu-Brasov, L

    2007-01-01

    The authors collected the data available on the Internet about a subject that we consider totally ignored in the Romanian scientific literature and unexpectedly insufficiently treated in the specialized ophthalmologic literature. Known in the specialty literature under the generic name of "computer vision syndrome", it is defined by the American Optometric Association as a complex of eye and vision problems related to activities that stress near vision and that are experienced in relation to, or during, the use of the computer. During consultations we hear frequent complaints of eye strain (asthenopia), headaches, blurred distance and/or near vision, dry and irritated eyes, slow refocusing, neck and backache, photophobia, diplopia, and light sensitivity, but because of the lack of information, we overlook them too easily, without going thoroughly into the real motives. In most developed countries, there are recommendations issued by renowned medical associations with regard to the definition, the diagnosis, and the methods for prevention, treatment, and periodic control of the symptoms found in computer users, in conjunction with extremely detailed ergonomic legislation. We found that these problems incite much too little interest in our country. We would like to rouse the interest of our ophthalmologist colleagues in understanding and recognizing these symptoms and in treating, or at least improving, them through specialized measures or through cooperation with our colleagues in occupational medicine.

  1. Neighbourhood looking glass: 360º automated characterisation of the built environment for neighbourhood effects research.

    PubMed

    Nguyen, Quynh C; Sajjadi, Mehdi; McCullough, Matt; Pham, Minh; Nguyen, Thu T; Yu, Weijun; Meng, Hsien-Wen; Wen, Ming; Li, Feifei; Smith, Ken R; Brunisholz, Kim; Tasdizen, Tolga

    2018-03-01

    Neighbourhood quality has been connected with an array of health issues, but neighbourhood research has been limited by the lack of methods to characterise large geographical areas. This study uses innovative computer vision methods and a new big data source of street view images to automatically characterise neighbourhood built environments. A total of 430 000 images were obtained using Google's Street View Image API for Salt Lake City, Chicago and Charleston. Convolutional neural networks were used to create indicators of street greenness, crosswalks and building type. We implemented log Poisson regression models to estimate associations between built environment features and individual prevalence of obesity and diabetes in Salt Lake City, controlling for individual-level and zip code-level predisposing characteristics. Computer vision models had an accuracy of 86%-93% compared with manual annotations. Charleston had the highest percentage of green streets (79%), while Chicago had the highest percentage of crosswalks (23%) and commercial buildings/apartments (59%). Built environment characteristics were categorised into tertiles, with the highest tertile serving as the referent group. Individuals living in zip codes with the most green streets, crosswalks and commercial buildings/apartments had relative obesity prevalences that were 25%-28% lower and relative diabetes prevalences that were 12%-18% lower than individuals living in zip codes with the least abundance of these neighbourhood features. Neighbourhood conditions may influence chronic disease outcomes. Google Street View images represent an underused data resource for the construction of built environment features. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
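
    The log Poisson regression step can be sketched as follows with statsmodels; the column names and synthetic data are illustrative, not the study's actual variables.

      # Log-Poisson regression of a binary outcome on a built-environment
      # indicator, yielding a prevalence ratio. Data and names are synthetic.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      df = pd.DataFrame({
          "obese":        np.random.binomial(1, 0.3, 500),   # outcome indicator
          "green_street": np.random.binomial(1, 0.5, 500),   # tertile flag
          "age":          np.random.uniform(20, 80, 500),    # individual control
      })
      X = sm.add_constant(df[["green_street", "age"]])
      model = sm.GLM(df["obese"], X, family=sm.families.Poisson()).fit()
      print(np.exp(model.params["green_street"]))  # prevalence ratio estimate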

  2. Metal surface corrosion grade estimation from single image

    NASA Astrophysics Data System (ADS)

    Chen, Yijun; Qi, Lin; Sun, Huyuan; Fan, Hao; Dong, Junyu

    2018-04-01

    Metal corrosion can cause many problems; how to quickly and effectively assess the grade of metal corrosion and remediate it in time is a very important issue. Typically, this is done by trained surveyors at great cost. Assisting them in the inspection process with computer vision and artificial intelligence would decrease the inspection cost. In this paper, we propose a dataset of metal surface corrosion for computer vision detection and present a comparison between standard computer vision techniques using OpenCV and a deep learning method for automatic metal surface corrosion grade estimation from a single image on this dataset. The test was performed by classifying images and calculating the accuracy of the two different approaches.
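
    One plausible form of the "standard computer vision" baseline, sketched as hand-crafted HOG features plus an SVM; the feature and classifier choices are assumptions, not the paper's exact configuration.

      # Classical baseline for corrosion grading: HOG features + SVM.
      # An assumed configuration, for illustration only.
      import cv2
      from skimage.feature import hog
      from sklearn.svm import SVC

      def corrosion_features(path):
          img = cv2.resize(cv2.imread(path, cv2.IMREAD_GRAYSCALE), (128, 128))
          return hog(img, orientations=9, pixels_per_cell=(16, 16),
                     cells_per_block=(2, 2))

      # X = [corrosion_features(p) for p in image_paths]   # y = corrosion grades
      # clf = SVC(kernel="rbf").fit(X, y)   # accuracy compared against a CNN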

  3. Scene and human face recognition in the central vision of patients with glaucoma

    PubMed Central

    Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole

    2018-01-01

    Primary open-angle glaucoma (POAG) mainly affects peripheral vision first. Current behavioral studies support the idea that visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess the visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks in which low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than detection. Patients without a central defect performed similarly to the controls in the detection and categorization of faces. However, while the detection of scene images was well maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision. This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using the Humphrey 24–2 SITA-Standard test. PMID:29481572

  4. Computing and Office Automation: Changing Variables.

    ERIC Educational Resources Information Center

    Staman, E. Michael

    1981-01-01

    Trends in computing and office automation and their applications, including planning, institutional research, and general administrative support in higher education, are discussed. Changing aspects of information processing and an increasingly larger user community are considered. The computing literacy cycle may involve programming, analysis, use…

  5. A comparison of symptoms after viewing text on a computer screen and hardcopy.

    PubMed

    Chu, Christina; Rosenfield, Mark; Portello, Joan K; Benzoni, Jaclyn A; Collier, Juanita D

    2011-01-01

    Computer vision syndrome (CVS) is a complex of eye and vision problems experienced during or related to computer use. Ocular symptoms may include asthenopia, accommodative and vergence difficulties, and dry eye. CVS occurs in up to 90% of computer workers, and given the almost universal use of these devices, it is important to identify whether these symptoms are specific to computer operation, or are simply a manifestation of performing a sustained near-vision task. This study compared ocular symptoms immediately following a sustained near-vision task. Thirty young, visually normal subjects read text aloud either from a desktop computer screen or a printed hardcopy page at a viewing distance of 50 cm for a continuous 20 min period. Identical text, matched for size and contrast, was used in the two sessions. Target viewing angle and luminance were similar for the two conditions. Immediately following completion of the reading task, subjects completed a written questionnaire asking about their level of ocular discomfort during the task. When comparing the computer and hardcopy conditions, significant differences in median symptom scores were reported with regard to blurred vision during the task (t = 147.0; p = 0.03) and the mean symptom score (t = 102.5; p = 0.04). In both cases, symptoms were higher during computer use. Symptoms following sustained computer use were significantly worse than those reported after hardcopy fixation under similar viewing conditions. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will allow practitioners to optimize visual comfort and efficiency during computer operation.

  6. ADP Analysis project for the Human Resources Management Division

    NASA Technical Reports Server (NTRS)

    Tureman, Robert L., Jr.

    1993-01-01

    The ADP (Automated Data Processing) Analysis Project was conducted for the Human Resources Management Division (HRMD) of NASA's Langley Research Center. The three major areas of work in the project were computer support, automated inventory analysis, and an ADP study for the Division. The goal of the computer support work was to determine automation needs of Division personnel and help them solve computing problems. The goal of automated inventory analysis was to find a way to analyze installed software and usage on a Macintosh. Finally, the ADP functional systems study for the Division was designed to assess future HRMD needs concerning ADP organization and activities.

  7. Industry's tireless eyes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-08-01

    This article reports that there are literally hundreds of machine vision systems from which to choose. They range in cost from $10,000 to $1,000,000. Most have been designed for specific applications; the same systems, if used for a different application, may fail dismally. How can you avoid wasting money on inferior, useless, or nonexpandable systems? A good reference is the Automated Vision Association in Ann Arbor, Mich., a trade group composed of North American machine vision manufacturers. Reputable suppliers caution users to do their homework before making an investment. Important considerations include comprehensive details on the objects to be viewed - that is, quantity, shape, dimension, size, and configuration details; lighting characteristics and variations; and component orientation details. Then, what do you expect the system to do - inspect, locate components, aid in robotic vision? Other criteria include system speed and related accuracy and reliability. What are the projected benefits and system paybacks? Examine primarily paybacks associated with scrap and rework reduction as well as reduced warranty costs.

  8. Lumber Grading With A Computer Vision System

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  9. Range Image Flow using High-Order Polynomial Expansion

    DTIC Science & Technology

    2013-09-01

    included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more...Journal of Computer Vision, vol. 92, no. 1, pp. 1-31. 2. G. Bradski and A. Kaehler. 2008. Learning OpenCV: Computer Vision with the OpenCV Library
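
    The default OpenCV algorithm alluded to is presumably Farneback's polynomial-expansion optical flow; a minimal call, shown here on intensity images rather than range images, looks like this.

      # Dense optical flow via Farneback's polynomial-expansion method,
      # as shipped with OpenCV. Illustrative parameter values.
      import cv2

      prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
      curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
      flow = cv2.calcOpticalFlowFarneback(
          prev, curr, None,
          pyr_scale=0.5, levels=3, winsize=15,
          iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
      # flow[y, x] holds the (dx, dy) displacement estimated at each pixel.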

  10. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    DTIC Science & Technology

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey
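
    For context, a common focus measure used in automatic focusing is the variance of the Laplacian, maximized over lens positions; this sketch is a generic illustration, not the report's 1983 algorithm.

      # Focus measure: sharper images carry more high-frequency content,
      # so the Laplacian response has larger variance.
      import cv2

      def focus_measure(gray_image):
          return cv2.Laplacian(gray_image, cv2.CV_64F).var()

      # best_position = max(lens_positions,
      #                     key=lambda p: focus_measure(capture_at(p)))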

  11. Quality Parameters of Six Cultivars of Blueberry Using Computer Vision

    PubMed Central

    Celis Cofré, Daniela; Silva, Patricia; Enrione, Javier; Osorio, Fernando

    2013-01-01

    Background. Blueberries are considered an important source of health benefits. This work studied six blueberry cultivars: “Duke,” “Brigitta”, “Elliott”, “Centurion”, “Star,” and “Jewel”, measuring quality parameters such as °Brix, pH, moisture content using standard techniques and shape, color, and fungal presence obtained by computer vision. The storage conditions were time (0–21 days), temperature (4 and 15°C), and relative humidity (75 and 90%). Results. Significant differences (P < 0.05) were detected between fresh cultivars in pH, °Brix, shape, and color. However, the main parameters which changed depending on storage conditions, increasing at higher temperature, were color (from blue to red) and fungal presence (from 0 to 15%), both detected using computer vision, which is important to determine a shelf life of 14 days for all cultivars. Similar behavior during storage was obtained for all cultivars. Conclusion. Computer vision proved to be a reliable and simple method to objectively determine blueberry decay during storage that can be used as an alternative approach to currently used subjective measurements. PMID:26904598
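
    The blue-to-red colour shift used above as a decay indicator can be sketched as an HSV hue check; the thresholds below are assumptions, not the study's calibrated values.

      # Fraction of red pixels in a berry image as a crude decay indicator.
      # Thresholds are illustrative assumptions.
      import cv2
      import numpy as np

      def red_fraction(bgr_image):
          hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
          # Red hues wrap around 0 on OpenCV's 0-179 hue scale.
          red1 = cv2.inRange(hsv, (0, 60, 40), (10, 255, 255))
          red2 = cv2.inRange(hsv, (170, 60, 40), (179, 255, 255))
          return np.count_nonzero(red1 | red2) / red1.size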

  12. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    NASA Astrophysics Data System (ADS)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

    CVIPtools is a software package for the exploration of computer vision and image processing, developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) the CVIPtools Graphical User Interface, b) the CVIPtools C library, and c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers, and any user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis; the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.

  13. Applications of Computer Vision for Assessing Quality of Agri-food Products: A Review of Recent Research Advances.

    PubMed

    Ma, Ji; Sun, Da-Wen; Qu, Jia-Huan; Liu, Dan; Pu, Hongbin; Gao, Wen-Hong; Zeng, Xin-An

    2016-01-01

    With consumer concerns increasing over food quality and safety, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems over the years. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments for food quality and safety during food production and processing. Computer vision, comprising a nondestructive assessment approach, has the aptitude to estimate the characteristics of food products with its advantages of fast speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review aims to present the fundamentals and instrumentation of computer vision systems with details of applications in quality assessment of agri-food products from 2007 to 2013 and also discuss its future trends in combination with spectroscopy.

  14. Automated computation of autonomous spectral submanifolds for nonlinear modal analysis

    NASA Astrophysics Data System (ADS)

    Ponsioen, Sten; Pedergnana, Tiemo; Haller, George

    2018-04-01

    We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.

  15. Computer vision syndrome: a study of the knowledge, attitudes and practices in Indian ophthalmologists.

    PubMed

    Bali, Jatinder; Navin, Neeraj; Thakur, Bali Renu

    2007-01-01

    To study the knowledge, attitude and practices (KAP) towards computer vision syndrome prevalent in Indian ophthalmologists and to assess whether 'computer use by practitioners' had any bearing on the knowledge and practices in computer vision syndrome (CVS). A random KAP survey was carried out on 300 Indian ophthalmologists using a 34-point spot-questionnaire in January 2005. All the doctors who responded were aware of CVS. The chief presenting symptoms were eyestrain (97.8%), headache (82.1%), tiredness and burning sensation (79.1%), watering (66.4%) and redness (61.2%). Ophthalmologists using computers reported that focusing from distance to near and vice versa (P = 0.006, chi2 test), blurred vision at a distance (P = 0.016, chi2 test) and blepharospasm (P = 0.026, chi2 test) formed part of the syndrome. The main mode of treatment used was tear substitutes. Half of the ophthalmologists (50.7%) were not prescribing any spectacles. They did not have any preference for any special type of glasses (68.7%) or spectral filters. Computer users were more likely to prescribe sedatives/anxiolytics (P = 0.04, chi2 test), spectacles (P = 0.02, chi2 test) and conscious frequent blinking (P = 0.003, chi2 test) than the non-computer users. All respondents were aware of CVS. Confusion regarding treatment guidelines was observed in both groups. Computer-using ophthalmologists were more informed of symptoms and diagnostic signs but were misinformed about treatment modalities.

  16. 45 CFR 310.1 - What definitions apply to this part?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...

  17. 45 CFR 310.1 - What definitions apply to this part?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...

  18. 45 CFR 310.1 - What definitions apply to this part?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...

  19. 45 CFR 310.1 - What definitions apply to this part?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...

  20. 45 CFR 310.1 - What definitions apply to this part?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...

  1. Fusion of Multiple Sensing Modalities for Machine Vision

    DTIC Science & Technology

    1994-05-31

    Modeling of Non-Homogeneous 3-D Objects for Thermal and Visual Image Synthesis," Pattern Recognition, in press. [11] Nair, Dinesh, and J. K. Aggarwal...20th AIPR Workshop: Computer Vision--Meeting the Challenges, McLean, Virginia, October 1991. Nair, Dinesh, and J. K. Aggarwal, "An Object Recognition...Computer Engineering August 1992 Sunil Gupta Ph.D. Student Mohan Kumar M.S. Student Sandeep Kumar M.S. Student Xavier Lebegue Ph.D., Computer

  2. The Implications of Pervasive Computing on Network Design

    NASA Astrophysics Data System (ADS)

    Briscoe, R.

    Mark Weiser's late-1980s vision of an age of calm technology with pervasive computing disappearing into the fabric of the world [1] has been tempered by an industry-driven vision with more of a feel of conspicuous consumption. In the modified version, everyone carries around consumer electronics to provide natural, seamless interactions both with other people and with the information world, particularly for eCommerce, but still through a pervasive computing fabric.

  3. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine motion manipulation in a poorly structured world, an effort currently in progress, is described along with preliminary results and problems encountered.
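
    For the stereo part of the mapping problem described, matched 2D points are typically lifted to 3D from calibrated projection matrices before conversion to joint angles; a minimal triangulation sketch with OpenCV follows (inverse kinematics not shown).

      # Stereo triangulation: P1, P2 are the 3x4 projection matrices of the
      # calibrated cameras; pts_left/pts_right are 2xN matched pixel arrays.
      import cv2

      def triangulate(P1, P2, pts_left, pts_right):
          X_h = cv2.triangulatePoints(P1, P2, pts_left, pts_right)
          # Convert from homogeneous 4xN to Euclidean Nx3 world points.
          return (X_h[:3] / X_h[3]).T

      # world_points = triangulate(P1, P2, pts_left, pts_right)
      # joint_angles = inverse_kinematics(world_points)   # hypothetical step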

  4. Riemann tensor of motion vision revisited.

    PubMed

    Brill, M

    2001-07-02

    This note shows that the Riemann-space interpretation of motion vision developed by Barth and Watson is neither necessary for their results, nor sufficient to handle an intrinsic coordinate problem. Recasting the Barth-Watson framework as a classical velocity-solver (as in computer vision) solves these problems.

  5. Evaluation of the Waggoner Computerized Color Vision Test.

    PubMed

    Ng, Jason S; Self, Eriko; Vanston, John E; Nguyen, Andrew L; Crognale, Michael A

    2015-04-01

    Clinical color vision evaluation has been based primarily on the same set of tests for the past several decades. Recently, computer-based color vision tests have been devised, and these have several advantages but are still not widely used. In this study, we evaluated the Waggoner Computerized Color Vision Test (CCVT), which was developed for widespread use with common computer systems. A sample of subjects with (n = 59) and without (n = 361) color vision deficiency (CVD) were tested on the CCVT, the anomaloscope, the Richmond HRR (Hardy-Rand-Rittler) (4th edition), and the Ishihara test. The CCVT was administered in two ways: (1) on a computer monitor using its default settings and (2) on one standardized to a correlated color temperature (CCT) of 6500 K. Twenty-four subjects with CVD performed the CCVT both ways. Sensitivity, specificity, and correct classification rates were determined. The screening performance of the CCVT was good (95% sensitivity, 100% specificity). The CCVT classified subjects as deutan or protan in agreement with anomaloscopy 89% of the time. It generally classified subjects as having a more severe defect compared with other tests. Results from 18 of the 24 subjects with CVD tested under both default and calibrated CCT conditions were the same, whereas the results from 6 subjects had better agreement with other test results when the CCT was set. The Waggoner CCVT is an adequate color vision screening test with several advantages and appears to provide a fairly accurate diagnosis of deficiency type. Used in conjunction with other color vision tests, it may be a useful addition to a color vision test battery.

  6. 17 CFR 38.156 - Automated trade surveillance system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... potential trade practice violations. The automated system must load and process daily orders and trades no... anomalies; compute, retain, and compare trading statistics; compute trade gains, losses, and futures...

  7. 17 CFR 38.156 - Automated trade surveillance system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... potential trade practice violations. The automated system must load and process daily orders and trades no... anomalies; compute, retain, and compare trading statistics; compute trade gains, losses, and futures...

  8. Education & Training for CAD/CAM: Results of a National Probability Survey. Krannert Institute Paper Series.

    ERIC Educational Resources Information Center

    Majchrzak, Ann

    A study was conducted of the training programs used by plants with computer-aided design/computer-aided manufacturing (CAD/CAM) to help their employees adapt to automated manufacturing. The study sought to determine the relative priorities of manufacturing establishments for training certain workers in certain skills; the status of…

  9. Nonanalytic Laboratory Automation: A Quarter Century of Progress.

    PubMed

    Hawker, Charles D

    2017-06-01

    Clinical laboratory automation has blossomed since the 1989 AACC meeting, at which Dr. Masahide Sasaki first showed a western audience what his laboratory had implemented. Many diagnostics and other vendors are now offering a variety of automated options for laboratories of all sizes. Replacing manual processing and handling procedures with automation was embraced by the laboratory community because of the obvious benefits of labor savings and improvement in turnaround time and quality. Automation was also embraced by the diagnostics vendors, who saw automation as a means of incorporating the analyzers purchased by their customers into larger systems in which the benefits of automation were integrated with the analyzers. This report reviews the options that are available to laboratory customers. These options include so-called task-targeted automation - modules that range from single-function devices that automate single tasks (e.g., decapping or aliquoting) to multifunction workstations that incorporate several of the functions of a laboratory sample processing department. The options also include total laboratory automation systems that use conveyors to link sample processing functions to analyzers and often include postanalytical features such as refrigerated storage and sample retrieval. Most importantly, this report reviews a recommended process for evaluating the need for new automation and for identifying the specific requirements of a laboratory and developing solutions that can meet those requirements. The report also discusses some of the practical considerations facing a laboratory in a new implementation and reviews the concept of machine vision to replace human inspections. © 2017 American Association for Clinical Chemistry.

  10. A Graphical Operator Interface for a Telerobotic Inspection System

    NASA Technical Reports Server (NTRS)

    Kim, W. S.; Tso, K. S.; Hayati, S.

    1993-01-01

    Operator interface has recently emerged as an important element for efficient and safe operator interactions with the telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  11. An operator interface design for a telerobotic inspection system

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Tso, Kam S.; Hayati, Samad

    1993-01-01

    The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobotics. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface, designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability, supports three inspection strategies of teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  12. In-process fault detection for textile fabric production: onloom imaging

    NASA Astrophysics Data System (ADS)

    Neumann, Florian; Holtermann, Timm; Schneider, Dorian; Kulczycki, Ashley; Gries, Thomas; Aach, Til

    2011-05-01

    Constant and traceable high fabric quality is of great importance both for technical and for high-quality conventional fabrics. Usually, quality inspection is carried out by trained personnel, whose detection rate and maximum period of concentration are limited. Low-resolution automated fabric inspection machines using texture analysis were developed. Since 2003, systems for in-process inspection on weaving machines ("onloom") have been commercially available. With these, defects can be detected, but not measured quantitatively and precisely. Most systems are also prone to inevitable machine vibrations. Feedback loops for fault prevention are not established. Technology has evolved since 2003: camera and computer prices dropped, resolutions were enhanced, and recording speeds increased. These are the preconditions for real-time processing of high-resolution images. So far, these new technological achievements have not been used in textile fabric production. For efficient use, a measurement system must be integrated into the weaving process, and new algorithms for defect detection and measurement must be developed. The goal of the joint project is the development of a modern machine vision system for nondestructive onloom fabric inspection. The system consists of a vibration-resistant machine integration, a high-resolution machine vision system, and new, reliable, and robust algorithms with a quality database for defect documentation. The system is meant to detect, measure, and classify at least 80% of economically relevant defects. Concepts for feedback loops into the weaving process will be pointed out.

  13. Microwave vision for robots

    NASA Technical Reports Server (NTRS)

    Lewandowski, Leon; Struckman, Keith

    1994-01-01

    Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.

  14. A linear programming approach to reconstructing subcellular structures from confocal images for automated generation of representative 3D cellular models.

    PubMed

    Wood, Scott T; Dean, Brian C; Dean, Delphine

    2013-04-01

    This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. Copyright © 2012 Elsevier B.V. All rights reserved.
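
    The fibre-superposition idea can be sketched as a non-negative least-squares problem, where the columns of A are candidate fibre renderings and b is the observed image; the paper's actual linear programming formulation differs in detail.

      # Choose fibre weights x >= 0 minimizing ||A x - b||: columns of A are
      # flattened renderings of candidate fibres, b is the observed F-actin
      # image. Synthetic data stands in for real confocal stacks.
      import numpy as np
      from scipy.optimize import nnls

      n_pixels, n_candidates = 1000, 50
      A = np.abs(np.random.randn(n_pixels, n_candidates))  # candidate fibres
      b = np.abs(np.random.randn(n_pixels))                # observed image
      weights, residual = nnls(A, b)
      selected = np.nonzero(weights > 1e-6)[0]             # fibres kept in model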

  15. Image-based query-by-example for big databases of galaxy images

    NASA Astrophysics Data System (ADS)

    Shamir, Lior; Kuminski, Evan

    2017-01-01

    Very large astronomical databases containing millions or even billions of galaxy images have been becoming increasingly important tools in astronomy research. However, in many cases the very large size makes it more difficult to analyze these data manually, reinforcing the need for computer algorithms that can automate the data analysis process. An example of such a task is the identification of galaxies of a certain morphology of interest. For instance, if a rare galaxy is identified it is reasonable to expect that more galaxies of similar morphology exist in the database, but it is virtually impossible to manually search these databases to identify such galaxies. Here we describe computer vision and pattern recognition methodology that receives a galaxy image as an input, and searches automatically a large dataset of galaxies to return a list of galaxies that are visually similar to the query galaxy. The returned list is not necessarily complete or clean, but it provides a substantial reduction of the original database into a smaller dataset, in which the frequency of objects visually similar to the query galaxy is much higher. Experimental results show that the algorithm can identify rare galaxies such as ring galaxies among datasets of 10,000 astronomical objects.
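
    In sketch form, query-by-example reduces to computing a feature vector per galaxy image and returning the nearest neighbours of the query; the simple moment-based descriptor below is a stand-in for the paper's actual features.

      # Query-by-example: describe each image by a feature vector, then
      # return nearest neighbours of the query. The descriptor is a
      # hypothetical stand-in.
      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      def describe(image):
          # Intensity statistics plus mean gradient energy.
          gy, gx = np.gradient(image.astype(float))
          return np.array([image.mean(), image.std(),
                           np.abs(gx).mean(), np.abs(gy).mean()])

      # X = np.stack([describe(img) for img in dataset_images])
      # index = NearestNeighbors(n_neighbors=50).fit(X)
      # _, candidates = index.kneighbors(describe(query_image)[None, :])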

  16. Using satellite communications for a mobile computer network

    NASA Technical Reports Server (NTRS)

    Wyman, Douglas J.

    1993-01-01

    The topics discussed include the following: patrol car automation, mobile computer network, network requirements, network design overview, MCN mobile network software, MCN hub operation, mobile satellite software, hub satellite software, the benefits of patrol car automation, the benefits of satellite mobile computing, and national law enforcement satellite.

  17. Real Time Target Tracking Using Dedicated Vision Hardware

    NASA Astrophysics Data System (ADS)

    Kambies, Keith; Walsh, Peter

    1988-03-01

    This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general-purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated-arm industrial robot using a camera and dedicated vision processor as the input sensor so that the robot can locate and track a moving target. The vision system is inside the loop closure of the robot tracking system; therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing the image at frame rates were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high-speed vision-based tracking task, the system throughput requirements, the use of a dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so special emphasis is placed on this topic in the paper.

  18. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active-vision-based reverse engineering approach to extract three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems to improve the accuracy of 3D teeth models and, at the same time, improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing, and fast and accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and a weighted objectives evaluation chart. Reconstruction results and an accuracy evaluation are presented on digitizing different teeth models.
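
    The edge-detection and boundary-tracing stages might be sketched with OpenCV as follows; the paper's own implementation details are not reproduced here.

      # Edge detection plus boundary tracing on one image slice; the largest
      # traced contour is taken as the tooth silhouette. Illustrative only.
      import cv2

      def tooth_silhouette(gray_slice):
          edges = cv2.Canny(gray_slice, 50, 150)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_NONE)
          return max(contours, key=cv2.contourArea) if contours else None

      # Stacking one traced silhouette per slice yields the 3D tooth model.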

  19. Characterization of the optic disc in retinal imagery using a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Tobin, Kenneth W., Jr.; Chaum, Edward; Govindasamy, V. P.; Karnowski, Thomas P.; Sezer, Omer

    2006-03-01

    The application of computer based image analysis to the diagnosis of retinal disease is rapidly becoming a reality due to the broad-based acceptance of electronic imaging devices throughout the medical community and through the collection and accumulation of large patient histories in picture archiving and communications systems. Advances in the imaging of ocular anatomy and pathology can now provide data to diagnose and quantify specific diseases such as diabetic retinopathy (DR). Visual disability and blindness have a profound socioeconomic impact upon the diabetic population and DR is the leading cause of new blindness in working-age adults in the industrialized world. To reduce the impact of diabetes on vision loss, robust automation is required to achieve productive computer-based screening of large at-risk populations at lower cost. Through this research we are developing automation methods for locating and characterizing important structures in the human retina such as the vascular arcades, optic nerve, macula, and lesions. In this paper we present results for the automatic detection of the optic nerve using digital red-free fundus photography. Our method relies on the accurate segmentation of the vasculature of the retina along with spatial probability distributions describing the luminance across the retina and the density, average thickness, and average orientation of the vasculature in relation to the position of the optic nerve. With these features and other prior knowledge, we predict the location of the optic nerve in the retina using a two-class, Bayesian classifier. We report 81% detection performance on a broad range of red-free fundus images representing a population of over 345 patients with 19 different pathologies associated with DR.
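
    The two-class Bayesian decision described above can be sketched with a Gaussian naive Bayes classifier over the listed features; the feature extraction itself is not shown, and the feature layout is an assumption.

      # Two-class Bayesian classification of candidate retinal regions.
      # Each row: [luminance, vessel_density, vessel_thickness, vessel_orientation];
      # y = 1 where the region contains the optic nerve, else 0.
      from sklearn.naive_bayes import GaussianNB

      clf = GaussianNB()
      # clf.fit(X_train, y_train)
      # posterior = clf.predict_proba(X_candidate_regions)[:, 1]
      # nerve_location = candidate_regions[posterior.argmax()]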

  20. Using Remotely Sensed Data to Automate and Improve Census Bureau Update Activities

    NASA Astrophysics Data System (ADS)

    Desch, A., IV

    2017-12-01

    Location of established and new housing structures is fundamental to the Census Bureau's planning and execution of each decennial census. Past Census address list compilation and update programs have involved sending more than 100,000 workers into the field to find and verify housing units. The 2020 Census program has introduced an imagery-based In-Office Address Canvassing Interactive Review (IOAC-IR) program in an attempt to reduce the in-field workload. The human-analyst-driven, aerial-image-based IOAC-IR operation has proven to be a cost-effective and accurate substitute for a large portion of the expensive in-field address canvassing operations. However, the IOAC-IR still required more than a year to complete and over 100 full-time dedicated employees. Much of the basic image analysis work done in IOAC-IR can be handled with established remote sensing and computer vision techniques. The experience gained from the Interactive Review phase of In-Office Address Canvassing has led to the development of a prototype geo-processing tool to automate much of this process for future and ongoing Address Canvassing operations. This prototype utilizes high-resolution aerial imagery and LiDAR to identify structures and compare their locations to existing Census geographic information. In this presentation, we report on the comparison of this exploratory system's results to the human-based IOAC-IR. The experimental image- and LiDAR-based change detection approach has itself led to very promising follow-on experiments utilizing very current, high-repeat datasets and scalable cloud computing. We will discuss how these new techniques can be used both to help the US Census Bureau meet its goal of identifying all the housing units in the US and to help developing countries better identify where their populations are currently distributed.

  1. Labour-efficient in vitro lymphocyte population tracking and fate prediction using automation and manual review.

    PubMed

    Chakravorty, Rajib; Rawlinson, David; Zhang, Alan; Markham, John; Dowling, Mark R; Wellard, Cameron; Zhou, Jie H S; Hodgkin, Philip D

    2014-01-01

    Interest in cell heterogeneity and differentiation has recently led to increased use of time-lapse microscopy. Previous studies have shown that cell fate may be determined well in advance of the event. We used a mixture of automation and manual review of time-lapse live cell imaging to track the positions, contours, divisions, deaths and lineage of 44 B-lymphocyte founders and their 631 progeny in vitro over a period of 108 hours. Using this data to train a Support Vector Machine classifier, we were retrospectively able to predict the fates of individual lymphocytes with more than 90% accuracy, using only time-lapse imaging captured prior to mitosis or death of 90% of all cells. The motivation for this paper is to explore the impact of labour-efficient assistive software tools that allow larger and more ambitious live-cell time-lapse microscopy studies. After training on this data, we show that machine learning methods can be used for realtime prediction of individual cell fates. These techniques could lead to realtime cell culture segregation for purposes such as phenotype screening. We were able to produce a large volume of data with less effort than previously reported, due to the image processing, computer vision, tracking and human-computer interaction tools used. We describe the workflow of the software-assisted experiments and the graphical interfaces that were needed. To validate our results we used our methods to reproduce a variety of published data about lymphocyte populations and behaviour. We also make all our data publicly available, including a large quantity of lymphocyte spatio-temporal dynamics and related lineage information.
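
    The fate-prediction step reduces to training a Support Vector Machine on per-cell track features captured before the division or death event; the feature names below are illustrative, not the study's exact measurements.

      # SVM fate prediction from per-cell track features (e.g., area,
      # motility, lineage depth); y = 1 if the cell later divided, 0 if it
      # died. Feature choices are assumptions.
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      clf = SVC(kernel="rbf", C=1.0)
      # scores = cross_val_score(clf, X, y, cv=5)   # reported accuracy was >90%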

  2. Spinoff 2010

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Topics covered include: Burnishing Techniques Strengthen Hip Implants; Signal Processing Methods Monitor Cranial Pressure; Ultraviolet-Blocking Lenses Protect, Enhance Vision; Hyperspectral Systems Increase Imaging Capabilities; Programs Model the Future of Air Traffic Management; Tail Rotor Airfoils Stabilize Helicopters, Reduce Noise; Personal Aircraft Point to the Future of Transportation; Ducted Fan Designs Lead to Potential New Vehicles; Winglets Save Billions of Dollars in Fuel Costs; Sensor Systems Collect Critical Aerodynamics Data; Coatings Extend Life of Engines and Infrastructure; Radiometers Optimize Local Weather Prediction; Energy-Efficient Systems Eliminate Icing Danger for UAVs; Rocket-Powered Parachutes Rescue Entire Planes; Technologies Advance UAVs for Science, Military; Inflatable Antennas Support Emergency Communication; Smart Sensors Assess Structural Health; Hand-Held Devices Detect Explosives and Chemical Agents; Terahertz Tools Advance Imaging for Security, Industry; LED Systems Target Plant Growth; Aerogels Insulate Against Extreme Temperatures; Image Sensors Enhance Camera Technologies; Lightweight Material Patches Allow for Quick Repairs; Nanomaterials Transform Hairstyling Tools; Do-It-Yourself Additives Recharge Auto Air Conditioning; Systems Analyze Water Quality in Real Time; Compact Radiometers Expand Climate Knowledge; Energy Servers Deliver Clean, Affordable Power; Solutions Remediate Contaminated Groundwater; Bacteria Provide Cleanup of Oil Spills, Wastewater; Reflective Coatings Protect People and Animals; Innovative Techniques Simplify Vibration Analysis; Modeling Tools Predict Flow in Fluid Dynamics; Verification Tools Secure Online Shopping, Banking; Toolsets Maintain Health of Complex Systems; Framework Resources Multiply Computing Power; Tools Automate Spacecraft Testing, Operation; GPS Software Packages Deliver Positioning Solutions; Solid-State Recorders Enhance Scientific Data Collection; Computer Models Simulate Fine Particle Dispersion; Composite Sandwich Technologies Lighten Components; Cameras Reveal Elements in the Short Wave Infrared; Deformable Mirrors Correct Optical Distortions; Stitching Techniques Advance Optics Manufacturing; Compact, Robust Chips Integrate Optical Functions; Fuel Cell Stations Automate Processes, Catalyst Testing; Onboard Systems Record Unique Videos of Space Missions; Space Research Results Purify Semiconductor Materials; and Toolkits Control Motion of Complex Robotics.

  3. Research on an autonomous vision-guided helicopter

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Mesaki, Yuji; Kanade, Takeo

    1994-01-01

    Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom-designed vision processing hardware and an indoor testbed. The custom-designed hardware provided flexible integration of on-board sensors with real-time image processing, resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient, calibrated experimentation in constructing real autonomous systems.

  4. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other eye-in-hand visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction, which exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image; therefore, at a higher level, those assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery: when a lower-level routine fails, the next-higher routine is called, and so on, as sketched below. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
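
    A schematic sketch of that fall-back hierarchy, assuming OpenCV and a simple bright-target detector as stand-ins for the paper's feature extraction; window sizes and thresholds are illustrative only:

    ```python
    # Illustrative hierarchy: a fast region-of-interest tracker is tried
    # first, and a slower, more reliable whole-image search is the
    # error-recovery fallback. The bright-blob "feature" and thresholds
    # are stand-ins for the paper's actual methods.
    import cv2
    import numpy as np

    def centroid(binary):
        m = cv2.moments(binary)
        if m["m00"] < 1e-3:
            return None                      # nothing detected
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])

    def track_in_roi(frame, roi):
        """Fast level: search only a window predicted from past motion."""
        x, y, w, h = roi
        _, mask = cv2.threshold(frame[y:y + h, x:x + w], 200, 255,
                                cv2.THRESH_BINARY)
        c = centroid(mask)
        return None if c is None else (x + c[0], y + c[1])

    def detect_whole_image(frame):
        """Slow level: reliable full-frame search for error recovery."""
        _, mask = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
        return centroid(mask)

    def locate_target(frame, predicted_roi):
        # Hierarchy: fast-but-fragile first, slow-but-robust on failure.
        return track_in_roi(frame, predicted_roi) or detect_whole_image(frame)

    frame = np.zeros((240, 320), dtype=np.uint8)
    cv2.circle(frame, (160, 120), 10, 255, -1)    # synthetic bright target
    print(locate_target(frame, (140, 100, 40, 40)))
    ```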

  5. Real-time unconstrained object recognition: a processing pipeline based on the mammalian visual system.

    PubMed

    Aguilar, Mario; Peot, Mark A; Zhou, Jiangying; Simons, Stephen; Liao, Yuwei; Metwalli, Nader; Anderson, Mark B

    2012-03-01

    The mammalian visual system is still the gold standard for recognition accuracy, flexibility, efficiency, and speed. Ongoing advances in our understanding of function and mechanisms in the visual system can now be leveraged to pursue the design of computer vision architectures that will revolutionize the state of the art in computer vision.

  6. Computer Vision Systems for Hardwood Logs and Lumber

    Treesearch

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University, with support and cooperation from the U.S. Forest Service, are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  7. Automating the Analytical Laboratories Section, Lewis Research Center, National Aeronautics and Space Administration: A feasibility study

    NASA Technical Reports Server (NTRS)

    Boyle, W. G.; Barton, G. W.

    1979-01-01

    The feasibility of computerized automation of the Analytical Laboratories Section at NASA's Lewis Research Center was considered. Since that laboratory's duties are not routine, the automation goals were set with that in mind. Four instruments were selected as the most likely automation candidates: an atomic absorption spectrophotometer, an emission spectrometer, an X-ray fluorescence spectrometer, and an X-ray diffraction unit. Two options for computer automation were described: a time-shared central computer and a system with microcomputers for each instrument connected to a central computer. A third option, presented for future planning, expands the microcomputer version. Costs and benefits for each option were considered. It was concluded that the microcomputer version best fits the goals and duties of the laboratory and that such an automated system is needed to meet the laboratory's future requirements.

  8. Quantification of color vision using a tablet display.

    PubMed

    Chacon, Alicia; Rabin, Jeff; Yu, Dennis; Johnston, Shawn; Bradshaw, Timothy

    2015-01-01

    Accurate color vision is essential for optimal performance in aviation and space environments using nonredundant color coding to convey critical information. Most color tests detect color vision deficiency (CVD) but fail to diagnose type or severity of CVD, which are important to link performance to occupational demands. The computer-based Cone Contrast Test (CCT) diagnoses type and severity of CVD. It is displayed on a netbook computer for clinical application, but a more portable version may prove useful for deployments, space and aviation cockpits, as well as accident and sports medicine settings. Our purpose was to determine if the CCT can be conducted on a tablet display (Windows 8, Microsoft, Seattle, WA) using touch-screen response input. The CCT presents colored letters visible only to red (R), green (G), and blue (B) sensitive retinal cones to determine the lowest R, G, and B cone contrast visible to the observer. The CCT was measured in 16 color vision normals (CVN) and 16 CVDs using the standard netbook computer and a Windows 8 tablet display calibrated to produce equal color contrasts. Both displays showed 100% specificity for confirming CVN and 100% sensitivity for detecting CVD. In CVNs there was no difference between scores on netbook vs. tablet displays. G cone CVDs showed slightly lower G cone CCT scores on the tablet. CVD can be diagnosed with a tablet display. Ease-of-use, portability, and complete computer capabilities make tablets ideal for multiple settings, including aviation, space, military deployments, accidents and rescue missions, and sports vision. Chacon A, Rabin J, Yu D, Johnston S, Bradshaw T. Quantification of color vision using a tablet display.
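
    As a rough illustration of how a test like the CCT can automate threshold finding, the sketch below runs a generic 1-up/2-down staircase against a simulated observer; the step rule, starting contrast, and observer model are assumptions, not the CCT's actual psychophysics:

    ```python
    # Generic 1-up/2-down staircase for estimating a contrast threshold,
    # illustrating the kind of procedure an automated cone contrast test
    # performs. All parameters and the simulated observer are assumptions.
    import random

    def observer_sees(contrast, true_threshold=0.08):
        """Simulated observer: letter is seen when contrast exceeds threshold."""
        return contrast > true_threshold + random.gauss(0, 0.01)

    def staircase(contrast=0.5, step=0.5, n_reversals=8):
        streak, reversals, last_direction = 0, [], None
        while len(reversals) < n_reversals:
            if observer_sees(contrast):
                streak += 1
                if streak == 2:                     # two correct -> lower contrast
                    streak = 0
                    if last_direction == "up":
                        reversals.append(contrast)  # direction changed: reversal
                    contrast *= 1 - step
                    last_direction = "down"
            else:                                   # one miss -> raise contrast
                streak = 0
                if last_direction == "down":
                    reversals.append(contrast)
                contrast *= 1 + step
                last_direction = "up"
            step = max(step * 0.9, 0.05)            # shrink steps near threshold
        return sum(reversals) / len(reversals)      # threshold estimate

    print("estimated contrast threshold: %.3f" % staircase())
    ```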

  9. Automated Essay Scoring

    ERIC Educational Resources Information Center

    Dikli, Semire

    2006-01-01

    The impacts of computers on writing have been widely studied for three decades. Even basic computer functions, i.e., word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali,…

  10. Computer-Aided Instruction in Automated Instrumentation.

    ERIC Educational Resources Information Center

    Stephenson, David T.

    1986-01-01

    Discusses functions of automated instrumentation systems, i.e., systems which combine electrical measuring instruments and a controlling computer to measure responses of a unit under test. The computer-assisted tutorial then described is programmed for use on such a system--a modern microwave spectrum analyzer--to introduce engineering students to…

  11. VTLS Inc.: The Company, the Products, the Services, the Vision.

    ERIC Educational Resources Information Center

    Chachra, Vinod; And Others

    1993-01-01

    Describes the range of products and services offered by VTLS, a company that offers comprehensive, integrated library automation software and customer support. VTLS's growth and development in the United States and abroad is described, and nine sidebar articles detail system features and applications in public, academic, and virtual libraries. (20…

  12. Scanning System -- Technology Worth a Look

    Treesearch

    Philip A. Araman; Daniel L. Schmoldt; Richard W. Conners; D. Earl Kline

    1995-01-01

    In an effort to help automate the inspection for lumber defects, optical scanning systems are emerging as an alternative to the human eye. Although still in its infancy, scanning technology is being explored by machine companies and universities. This article was excerpted from "Machine Vision Systems for Grading and Processing Hardwood Lumber," by Philip...

  13. Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch

    PubMed Central

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Halim, Zaini Abdul; Ibrahim, Haidi; Ali, Syed Salim Syed

    2012-01-01

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFBs. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Color features were then extracted from the images and used as inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFBs was investigated using two methods: training the ANN with the full feature set, and training the ANN with a reduced feature set based on the Principal Component Analysis (PCA) data reduction technique. Results showed that, compared with using the full feature set, the ANN trained with reduced features improved classification accuracy by 1.66% and is more effective for developing an automated ripeness classifier for oil palm FFBs. The developed ripeness classifier can act as a sensor for determining the correct oil palm FFB ripeness category. PMID:23202043
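
    A minimal sketch of the reduced-feature pipeline the abstract describes, with PCA-compressed color features feeding a small neural network; the feature count, component count, and network size are illustrative assumptions (scikit-learn is assumed):

    ```python
    # Illustrative PCA + neural-network ripeness classifier mirroring the
    # "reduced features" approach above. Data, component count, and the
    # network architecture are assumptions for illustration only.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 12))      # 12 color features per FFB image
    y = rng.integers(0, 3, size=300)    # three ripeness categories

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=4),            # reduced feature set
        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    )
    model.fit(X_train, y_train)
    print("test accuracy: %.2f" % model.score(X_test, y_test))
    ```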

  14. An Analysis for Capital Expenditure Decisions at a Naval Regional Medical Center.

    DTIC Science & Technology

    1981-12-01

    Service ranking: (1) portable defibrillator and cardioscope; (2) ECG cart; (3) gas system sterilizer; (4) automated blood cell counter; (5) computed tomographic scanner. Equipment Review Committee ranking: (1) computed tomographic scanner; (2) automated blood cell counter; (3) gas system sterilizer; (4) portable defibrillator and cardioscope; (5) ECG cart. ... (dictating and automated typing) systems. e. Filing equipment. f. Automatic data processing equipment, including data communications equipment. g

  15. Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness

    NASA Astrophysics Data System (ADS)

    Singh, Preetpal

    Canada is home to thousands of freshwater lakes and rivers. Apart from being sources of infinite natural beauty, rivers and lakes are an important source of water, food and transportation. Northern Canada experiences extremely cold temperatures in the winter, resulting in the freeze-up of regional lakes and rivers. Frozen lakes and rivers tend to offer unique opportunities in terms of wildlife harvesting and winter transportation. Ice roads built on frozen rivers and lakes are vital supply lines for industrial operations in the remote north. Monitoring the ice freeze-up and break-up dates annually can help predict regional climatic changes. Lake ice impacts a variety of physical, ecological and economic processes. The construction and maintenance of a winter road can cost millions of dollars annually. A good understanding of ice mechanics is required to build and deem an ice road safe. A crucial factor in calculating the load-bearing capacity of ice sheets is the thickness of the ice. Construction costs are mainly attributed to producing and maintaining a specific thickness and density of ice that can support different loads. Climate change is leading to warmer temperatures, causing the ice to thin faster. At a certain point, a winter road may not be thick enough to support travel and transportation. There is considerable interest in monitoring winter road conditions given the high construction and maintenance costs involved. Remote sensing technologies such as Synthetic Aperture Radar have been successfully utilized to study the extent of ice covers and record freeze-up and break-up dates of ice on lakes and rivers across the north. Ice road builders often use ultrasound equipment to measure ice thickness; however, an automated monitoring system based on machine vision and image processing technology that can measure lake ice thickness has not previously been developed. Machine vision and image processing techniques have successfully been used in manufacturing to detect equipment failure and identify defective products on the assembly line. The research work in this thesis combines machine vision and image processing technology to build a digital imaging and processing system for monitoring and measuring lake ice thickness in real time. An ultra-compact USB camera is programmed to acquire and transmit high-resolution imagery for processing with the MATLAB Image Processing Toolbox. The image acquisition and transmission process is fully automated; image analysis is semi-automated and requires limited user input. Potential design changes to the prototype and ideas on fully automating the imaging and processing procedure are presented to conclude this research work.
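
    A simplified sketch of the measurement idea: detect the top and bottom edges of the ice sheet in a calibrated side-view image and convert the pixel gap to thickness. The edge detector, its parameters, and the calibration factor below are assumptions (OpenCV in Python stands in for the MATLAB pipeline):

    ```python
    # Illustrative edge-based thickness measurement from a side-view image:
    # find the highest and lowest edge rows and scale the pixel gap. Canny
    # parameters and the pixel-to-centimetre calibration are assumed values;
    # the thesis's MATLAB pipeline is not reproduced.
    import cv2
    import numpy as np

    CM_PER_PIXEL = 0.05   # assumed calibration from a reference target

    def ice_thickness_cm(gray_image):
        edges = cv2.Canny(gray_image, 50, 150)
        rows = np.where(edges.any(axis=1))[0]   # rows containing edge pixels
        if rows.size < 2:
            return None                         # no ice sheet detected
        return (rows.max() - rows.min()) * CM_PER_PIXEL

    # Synthetic test image: a bright horizontal band standing in for ice.
    img = np.zeros((480, 640), dtype=np.uint8)
    img[200:320, :] = 220
    print("estimated thickness: %s cm" % ice_thickness_cm(img))
    ```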

  16. Accuracy and efficiency of computer-aided anatomical analysis using 3D visualization software based on semi-automated and automated segmentations.

    PubMed

    An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang

    2017-03-01

    We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as a baseline, we evaluated the accuracy of 3D visualization and verified its utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measurement between the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by the CT vendor and the Mimics 3D visualization software from the third-party vendor possessed the functionality, efficiency and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
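
    Since each cadaver yields a software measurement and a surgical baseline, the agreement check amounts to a paired comparison; a sketch with synthetic data follows (a paired t-test is one reasonable choice, not necessarily the study's test):

    ```python
    # Sketch of a paired comparison between software-based and surgical
    # (baseline) measurements, one pair per cadaver. The data are synthetic
    # and a paired t-test is only one reasonable choice of test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    surgical = rng.normal(40.0, 5.0, size=50)             # baseline, mm
    software = surgical + rng.normal(0.0, 0.8, size=50)   # tool measurement

    t_stat, p_value = stats.ttest_rel(software, surgical)
    print("paired t-test: t=%.2f, p=%.3f" % (t_stat, p_value))
    # p > 0.05 would be consistent with "no significant difference".
    ```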

  17. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data form, while other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.
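
    Of the models surveyed, the multilayer back-propagation perceptron is the easiest to make concrete; below is a minimal numpy sketch of its forward and backward passes on toy data (layer sizes, data, and learning rate are arbitrary illustrative choices):

    ```python
    # Minimal multilayer perceptron trained by back-propagation, the basic
    # supervised model discussed in the review. All sizes and parameters
    # are arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(64, 8))                    # 64 samples, 8 features
    y = (X.sum(axis=1, keepdims=True) > 0) * 1.0    # toy binary target

    W1 = rng.normal(size=(8, 16)) * 0.1             # input -> hidden weights
    W2 = rng.normal(size=(16, 1)) * 0.1             # hidden -> output weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(500):
        h = sigmoid(X @ W1)                         # forward pass
        out = sigmoid(h @ W2)
        grad_out = (out - y) * out * (1 - out)      # backprop of squared error
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.1 * (h.T @ grad_out)                # gradient-descent updates
        W1 -= 0.1 * (X.T @ grad_h)

    accuracy = ((out > 0.5) == (y > 0.5)).mean()
    print("training accuracy: %.2f" % accuracy)
    ```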

  18. Computer vision-based classification of hand grip variations in neurorehabilitation.

    PubMed

    Zariffa, José; Steeves, John D

    2011-01-01

    The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects. © 2011 IEEE
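
    A sketch of the general recipe, image features plus a discriminative classifier, on synthetic frames; HOG features and logistic regression are stand-ins, not the classifier the paper used (scikit-image and scikit-learn are assumed):

    ```python
    # Illustrative hand-posture classification: HOG features per frame,
    # then a discriminative classifier. The features, classifier, and
    # synthetic frames are stand-ins, not the paper's actual method.
    import numpy as np
    from skimage.feature import hog
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    frames = rng.random(size=(90, 64, 64))   # synthetic 64x64 grayscale frames
    labels = np.repeat([0, 1, 2], 30)        # cylindrical, key grip, pinch

    features = np.array([hog(f, pixels_per_cell=(16, 16)) for f in frames])
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, features, labels, cv=3)
    print("cross-validated accuracy: %.2f" % scores.mean())
    ```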

  19. Multi-Center Evaluation of the Automated Immunohematology Instrument, the ORTHO VISION Analyzer.

    PubMed

    Aysola, Agnes; Wheeler, Leslie; Brown, Richard; Denham, Rebecca; Colavecchia, Connie; Pavenski, Katerina; Krok, Elizabeth; Hayes, Chelsea; Klapper, Ellen

    2017-02-01

    The ORTHO VISION Analyzer (Vision) is an immunohematology instrument using ID-MT gel card technology with digital image processing. It has continuous, random sample access with STAT priority processing. The efficiency and ease of operation of Vision were evaluated at 5 medical centers. De-identified patient samples were tested on the ORTHO ProVue Analyzer (ProVue) and repeated on the Vision, mimicking the daily workload pattern. Turnaround times (TAT) were collected and compared. Operators rated key features of the analyzer on a scale of 1 to 5. A total of 507 samples were tested on both instruments at the 5 trial sites. The mean TAT (SD) was 31.6 minutes (5.5) with Vision and 35.7 minutes (8.4) with ProVue, a 12% reduction. Type and screens were performed on 381 samples; the mean TAT (SD) was 32.2 minutes (4.5) with Vision and 37.0 minutes (7.4) with ProVue. Antibody identification with eleven panel cells was performed on 134 samples on Vision; the TAT (SD) was 43.2 minutes (8.3). The installation, training, configuration, maintenance and validation processes are all streamlined to provide a short implementation time. The average rating of main functions by the operators was 4.1 to 4.8. Opportunities for improvement, such as flexibility in editing QC results, the maintenance schedule, and printing options, were identified. The capabilities to perform serial dilutions, to accept pediatric tubes, and to review results via e-Connectivity are enhancements over the ProVue. Vision provides shorter TAT compared to ProVue, and every site described a positive experience using Vision. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
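
    The roughly 12% figure follows from the reported means; with the published means, SDs, and sample size, the difference can also be checked from summary statistics alone, as sketched below (treating the two instruments' measurements as independent samples is a simplification, since each sample ran on both):

    ```python
    # Re-check the turnaround-time comparison from the record's summary
    # statistics. Independence of the two samples is a simplifying
    # assumption; a paired analysis would need the raw per-sample data.
    from scipy import stats

    vision_mean, vision_sd = 31.6, 5.5
    provue_mean, provue_sd = 35.7, 8.4
    n = 507

    reduction = 100 * (provue_mean - vision_mean) / provue_mean
    print("TAT reduction: %.1f%%" % reduction)   # the record reports 12%

    t, p = stats.ttest_ind_from_stats(vision_mean, vision_sd, n,
                                      provue_mean, provue_sd, n)
    print("t = %.2f, p = %.3g" % (t, p))
    ```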
