Sample records for computer vision (CV)

  1. Stereo Vision Inside Tire

    DTIC Science & Technology

    2015-08-21

    using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the...depth. The software application calibrates the cameras using the plane-based calibration model from the OpenCV calib3d module and allows the...[6] OpenCV. 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed 09/01/2015]. [7] Qt. 2015. Qt Project home

  2. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming...video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to...DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains

  3. Range Image Flow using High-Order Polynomial Expansion

    DTIC Science & Technology

    2013-09-01

    included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more...Journal of Computer Vision, vol. 92, no. 1, pp. 1‒31. 2. G. Bradski and A. Kaehler. 2008. Learning OpenCV: Computer Vision with the OpenCV Library

  4. Heterogeneous compute in computer vision: OpenCL in OpenCV

    NASA Astrophysics Data System (ADS)

    Gasparakis, Harris

    2014-02-01

    We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision, and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.

  5. AstroCV: Astronomy computer vision library

    NASA Astrophysics Data System (ADS)

    González, Roberto E.; Muñoz, Roberto P.; Hernández, Cristian A.

    2018-04-01

    AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high-performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis on the automatic detection and classification of galaxies.

  6. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.

  7. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process and obtain higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate calibration errors. A high-accuracy profile-extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.

  8. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    PubMed Central

    Abbasi, Arash; Berry, Jeffrey C.; Callen, Steven T.; Chavez, Leonardo; Doust, Andrew N.; Feldman, Max J.; Gilbert, Kerrigan B.; Hodge, John G.; Hoyer, J. Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning. PMID:29209576

  9. PlantCV v2: Image analysis software for high-throughput plant phenotyping.

    PubMed

    Gehan, Malia A; Fahlgren, Noah; Abbasi, Arash; Berry, Jeffrey C; Callen, Steven T; Chavez, Leonardo; Doust, Andrew N; Feldman, Max J; Gilbert, Kerrigan B; Hodge, John G; Hoyer, J Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  10. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  11. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE PAGES

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash; ...

    2017-12-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  12. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    NASA Astrophysics Data System (ADS)

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-03-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak cell fluorescent labeling, using cell-simulating fluorescent microspheres with varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even in conditions with contrast degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models.

  13. Projector-Camera Systems for Immersive Training

    DTIC Science & Technology

    2006-01-01

    average to a sequence of 100 captured distortion-corrected images. The OpenCV library [OpenCV] was used for camera calibration. To correct for...rendering application [Treskunov, Pair, and Swartout, 2004]. It was transposed to take into account different matrix conventions between OpenCV and...Screen Imperfections. Proc. Workshop on Projector-Camera Systems (PROCAMS), Nice, France, IEEE. OpenCV: Open Source Computer Vision. [Available

  14. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    PubMed Central

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-01-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak cell fluorescent labeling, using cell-simulating fluorescent microspheres with varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even in conditions with contrast degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models. PMID:25822954

  15. VibroCV: a computer vision-based vibroarthrography platform with possible application to Juvenile Idiopathic Arthritis.

    PubMed

    Wiens, Andrew D; Prahalad, Sampath; Inan, Omer T

    2016-08-01

    Vibroarthrography, a method for interpreting the sounds emitted by a knee during movement, has been studied for several joint disorders since 1902. However, to our knowledge, the usefulness of this method for management of Juvenile Idiopathic Arthritis (JIA) has not been investigated. To study joint sounds as a possible new biomarker for pediatric cases of JIA we designed and built VibroCV, a platform to capture vibroarthrograms from four accelerometers; electromyograms (EMG) and inertial measurements from four wireless EMG modules; and joint angles from two Sony Eye cameras and six light-emitting diodes with commercially-available off-the-shelf parts and computer vision via OpenCV. This article explains the design of this turn-key platform in detail, and provides a sample recording captured from a pediatric subject.

  16. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are treated as panoramic images enriched by a depth map. Computer vision (CV) algorithms are used for orientation; they are evaluated for the correctness of tie-point detection, computation time, and the difficulty of their implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.

  17. The Event Detection and the Apparent Velocity Estimation Based on Computer Vision

    NASA Astrophysics Data System (ADS)

    Shimojo, M.

    2012-08-01

    The high spatial and temporal resolution data obtained by the telescopes aboard Hinode revealed new and interesting dynamics in the solar atmosphere. In order to detect such events and estimate the velocity of the dynamics automatically, we examined optical-flow estimation methods based on OpenCV, the computer vision library. We applied the methods to a prominence eruption observed by NoRH, and to a polar X-ray jet observed by XRT. As a result, it is clear that the methods work well for solar images if the images are optimized for the methods. This indicates that the optical-flow estimation methods in the OpenCV library are very useful for analyzing solar phenomena.

  18. Free LittleDog!: Towards Completely Untethered Operation of the LittleDog Quadruped

    DTIC Science & Technology

    2007-08-01

    helpful Intel Open Source Computer Vision (OpenCV) library [4] wherever possible rather than reimplementing many of the standard algorithms, however...correspondences between image points and world points, and feeding these to a camera calibration function, such as that provided by OpenCV, allows one to solve...OpenCV calibration function to that used for intrinsic calibration solves for Tboard→camerai. The position of the camera 37 Figure 5.3: Snapshot of

  19. Automatic tracking of red blood cells in micro channels using OpenCV

    NASA Astrophysics Data System (ADS)

    Rodrigues, Vânia; Rodrigues, Pedro J.; Pereira, Ana I.; Lima, Rui

    2013-10-01

    The present study aims to develop an automatic method able to track red blood cell (RBC) trajectories flowing through a microchannel using the Open Source Computer Vision (OpenCV) library. The developed method is based on optical-flow calculation assisted by maximization of the template-matching product. The experimental results show good performance of this method.

  20. Python and computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doak, J. E.; Prasad, Lakshman

    2002-01-01

    This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project, including existing C and C++ modules. For (1), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. Finally, (3) discusses the use of Boost.Python to interact with the pre-existing C and C++ code that creates the CDTs and CATs, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.

  1. Metal surface corrosion grade estimation from single image

    NASA Astrophysics Data System (ADS)

    Chen, Yijun; Qi, Lin; Sun, Huyuan; Fan, Hao; Dong, Junyu

    2018-04-01

    Metal corrosion can cause many problems, so quickly and effectively assessing the grade of metal corrosion and performing timely remediation is a very important issue. Typically, this is done by trained surveyors at great cost. Assisting them in the inspection process with computer vision and artificial intelligence would decrease the inspection cost. In this paper, we propose a dataset of metal surface corrosion for computer vision detection and present a comparison between standard computer vision techniques using OpenCV and a deep learning method for automatic estimation of metal surface corrosion grade from a single image on this dataset. The test has been performed by classifying images and calculating the accuracy of the two different approaches.

  2. Eyes of Things.

    PubMed

    Deniz, Oscar; Vallez, Noelia; Espinosa-Aranda, Jose L; Rico-Saavedra, Jose M; Parra-Patino, Javier; Bueno, Gloria; Moloney, David; Dehghani, Alireza; Dunne, Aubrey; Pagani, Alain; Krauss, Stephan; Reiser, Ruben; Waeny, Martin; Sorci, Matteo; Llewellynn, Tim; Fedorczak, Christian; Larmoire, Thierry; Herbst, Marco; Seirafi, Andre; Seirafi, Kasra

    2017-05-21

    Embedded systems control and monitor a great deal of our reality. While some "classic" features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Apart from the obvious necessity to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous "intelligence". Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling those challenges and opportunities.

  3. Eyes of Things

    PubMed Central

    Deniz, Oscar; Vallez, Noelia; Espinosa-Aranda, Jose L.; Rico-Saavedra, Jose M.; Parra-Patino, Javier; Bueno, Gloria; Moloney, David; Dehghani, Alireza; Dunne, Aubrey; Pagani, Alain; Krauss, Stephan; Reiser, Ruben; Waeny, Martin; Sorci, Matteo; Llewellynn, Tim; Fedorczak, Christian; Larmoire, Thierry; Herbst, Marco; Seirafi, Andre; Seirafi, Kasra

    2017-01-01

    Embedded systems control and monitor a great deal of our reality. While some “classic” features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Apart from the obvious necessity to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous “intelligence”. Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling those challenges and opportunities. PMID:28531141

  4. Computer Vision Tool and Technician as First Reader of Lung Cancer Screening CT Scans.

    PubMed

    Ritchie, Alexander J; Sanghera, Calvin; Jacobs, Colin; Zhang, Wei; Mayo, John; Schmidt, Heidi; Gingras, Michel; Pasian, Sergio; Stewart, Lori; Tsai, Scott; Manos, Daria; Seely, Jean M; Burrowes, Paul; Bhatia, Rick; Atkar-Khattra, Sukhinder; van Ginneken, Bram; Tammemagi, Martin; Tsao, Ming Sound; Lam, Stephen

    2016-05-01

    To implement a cost-effective low-dose computed tomography (LDCT) lung cancer screening program at the population level, accurate and efficient interpretation of a large volume of LDCT scans is needed. The objective of this study was to evaluate a workflow strategy to identify abnormal LDCT scans in which a technician assisted by computer vision (CV) software acts as a first reader with the aim to improve speed, consistency, and quality of scan interpretation. Without knowledge of the diagnosis, a technician reviewed 828 randomly batched scans (136 with lung cancers, 556 with benign nodules, and 136 without nodules) from the baseline Pan-Canadian Early Detection of Lung Cancer Study that had been annotated by the CV software CIRRUS Lung Screening (Diagnostic Image Analysis Group, Nijmegen, The Netherlands). The scans were classified as either normal (no nodules ≥1 mm or benign nodules) or abnormal (nodules or other abnormality). The results were compared with the diagnostic interpretation by Pan-Canadian Early Detection of Lung Cancer Study radiologists. The overall sensitivity and specificity of the technician in identifying an abnormal scan were 97.8% (95% confidence interval: 96.4-98.8) and 98.0% (95% confidence interval: 89.5-99.7), respectively. Of the 112 prevalent nodules that were found to be malignant in follow-up, 92.9% were correctly identified by the technician plus CV compared with 84.8% by the study radiologists. The average time taken by the technician to review a scan after CV processing was 208 ± 120 seconds. Prescreening CV software and a technician as first reader is a promising strategy for improving the consistency and quality of screening interpretation of LDCT scans. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.

  5. Computer-Assisted Culture Learning in an Online Augmented Reality Environment Based on Free-Hand Gesture Interaction

    ERIC Educational Resources Information Center

    Yang, Mau-Tsuen; Liao, Wan-Che

    2014-01-01

    The physical-virtual immersion and real-time interaction play an essential role in cultural and language learning. Augmented reality (AR) technology can be used to seamlessly merge virtual objects with real-world images to realize immersions. Additionally, computer vision (CV) technology can recognize free-hand gestures from live images to enable…

  6. Image Classification for Web Genre Identification

    DTIC Science & Technology

    2012-01-01

    recognition and landscape detection using the computer vision toolkit OpenCV. For facial recognition, we researched the possibilities of using the...method for connecting these names with a face/personal photo and logo respectively. [2] METHODOLOGY For this project, we focused primarily on facial

  7. Computer vision-based automated peak picking applied to protein NMR spectra.

    PubMed

    Klukowski, Piotr; Walczak, Michal J; Gonczarek, Adam; Boudet, Julien; Wider, Gerhard

    2015-09-15

    A detailed analysis of multidimensional NMR spectra of macromolecules requires the identification of individual resonances (peaks). This task can be tedious and time-consuming and often requires support by experienced users. Automated peak-picking algorithms were introduced more than 25 years ago, but major deficiencies still often prevent complete and error-free peak picking of biological macromolecule spectra. The major challenges for automated peak-picking algorithms are distinguishing artifacts from real peaks, particularly those with irregular shapes, and picking peaks in spectral regions with overlapping resonances, which are very hard to resolve with existing computer algorithms. In both cases a visual-inspection approach could be more effective than a 'blind' algorithm. We present a novel approach using computer vision (CV) methodology that can be better adapted to the problem of peak recognition. After suitable 'training' we successfully applied the CV algorithm to spectra of medium-sized soluble proteins up to molecular weights of 26 kDa and to a 130 kDa complex of a tetrameric membrane protein in detergent micelles. Our CV approach outperforms commonly used programs. With suitable training datasets the application of the presented method can be extended to automated peak picking in multidimensional spectra of nucleic acids or carbohydrates and adapted to solid-state NMR spectra. CV-Peak Picker is available upon request from the authors. gsw@mol.biol.ethz.ch; michal.walczak@mol.biol.ethz.ch; adam.gonczarek@pwr.edu.pl Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Coordinated Autonomy for Persistent Presence in Harbor and Riverine Environments

    DTIC Science & Technology

    2007-09-30

    estimators, and methods designed to deal with real-world problems such as video transmission noise; • OpenCV for basic computer vision functionality as...awareness and forward surveillance of Rocky’s intended path. Aerial video was transmitted to the UAV ground station, where an operator using GIS

  9. Unmanned aircraft systems image collection and computer vision image processing for surveying and mapping that meets professional needs

    NASA Astrophysics Data System (ADS)

    Peterson, James Preston, II

    Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close range photogrammetry, and between surveying and photogrammetry. UAS are providing an economic platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. The relationship between the AGL, ground sample distance, target spacing and the root mean square error of the targets is exploited by this research to develop guidelines that use the ASPRS and NSSDA map standard as the template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL is required to produce the desired accuracy. These guidelines also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.

  10. Neurally and Ocularly Informed Graph-Based Models for Searching 3D Environments

    DTIC Science & Technology

    2014-06-03

    hBCI = hybrid brain–computer interface, TAG = transductive annotation by graph, CV = computer vision, TSP = traveling salesman problem . are navigated...environment that are most likely to contain objects that the subject would like to visit. 2.9. Route planning A traveling salesman problem (TSP) solver...fixations in a visual search task using fixation-related potentials J. Vis. 13 Croes G 1958 A method for solving traveling - salesman problems Oper. Res

  11. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust, low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and successfully detected the intruders.
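As an illustration of the colour-histogram idea (not the authors' code), a NumPy sketch can compare a frame's RGB distribution against a background reference via histogram intersection and flag large departures; the threshold and synthetic frames below are assumptions:

```python
import numpy as np

def rgb_histogram(frame, bins=32):
    """Concatenated per-channel histogram, normalised to sum to 1."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical colour distributions."""
    return float(np.minimum(h1, h2).sum())

def detect_intruder(background, frame, threshold=0.95):
    """Flag a frame whose colour distribution departs from the
    background reference by more than the allowed threshold."""
    sim = histogram_intersection(rgb_histogram(background),
                                 rgb_histogram(frame))
    return sim < threshold

# Synthetic demo: a dark scene, then a bright "intruder" patch appears.
rng = np.random.default_rng(0)
bg = rng.integers(0, 40, size=(120, 160, 3), dtype=np.uint8)
frame = bg.copy()
frame[40:80, 60:100] = 220  # bright intruder region
print(detect_intruder(bg, bg.copy()), detect_intruder(bg, frame))
```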

  12. IJ-OpenCV: Combining ImageJ and OpenCV for processing images in biomedicine.

    PubMed

    Domínguez, César; Heras, Jónathan; Pascual, Vico

    2017-05-01

    The effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library. Based on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library. We have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest. The IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    NASA Astrophysics Data System (ADS)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
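The abstract does not specify the prediction model; purely as an illustration of mapping a pose measurement vector to an expected error, a closed-form ridge regressor on synthetic data could be sketched as follows (the 6-D pose layout and weights are hypothetical):

```python
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression: w = (X^T X + lam I)^-1 X^T y."""
    X1 = np.hstack([X, np.ones((len(X), 1))])       # append bias column
    A = X1.T @ X1 + lam * np.eye(X1.shape[1])
    return np.linalg.solve(A, X1.T @ y)

def predict(w, X):
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return X1 @ w

# Synthetic training data: 6-D pose (x, y, z, roll, pitch, yaw) -> error.
rng = np.random.default_rng(1)
poses = rng.normal(size=(200, 6))
true_w = np.array([0.5, -0.2, 0.1, 0.0, 0.3, 0.0])   # assumed ground truth
errors = poses @ true_w + 0.01 * rng.normal(size=200)

w = fit_ridge(poses, errors)
print(np.round(w[:6], 2))
```

A genuinely high-dimensional, nonlinear error surface would call for a more sophisticated model, which is exactly the question the paper investigates.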

  14. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which benefits applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
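The winner-take-all fusion step can be sketched independently of the GPU implementation: given matching-cost volumes from the different camera pairs, sum them and take the per-pixel disparity of minimum cost. The toy volumes below are assumptions for illustration:

```python
import numpy as np

def winner_take_all(cost_volumes):
    """Fuse matching-cost volumes (one per camera pair / direction) by
    summing costs and taking the per-pixel disparity of minimum cost.

    cost_volumes: list of arrays shaped (D, H, W); lower cost = better.
    Returns an (H, W) disparity map.
    """
    total = np.sum(cost_volumes, axis=0)   # (D, H, W) combined cost
    return np.argmin(total, axis=0)

# Toy volumes: both camera pairs agree on disparity 3 for every pixel.
D, H, W = 8, 4, 5
c1 = np.ones((D, H, W)); c1[3] = 0.2
c2 = np.ones((D, H, W)); c2[3] = 0.3
disp = winner_take_all([c1, c2])
print(disp)
```

On a GPGPU the per-pixel argmin is embarrassingly parallel, which is why this fusion maps well to CUDA.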

  15. Exploration of available feature detection and identification systems and their performance on radiographs

    NASA Astrophysics Data System (ADS)

    Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.

    2016-10-01

    Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We will review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we will explore possible reasons for the algorithms' lessened ability to detect and identify features from the X-ray radiographs.

  16. Analysis of Brown camera distortion model

    NASA Astrophysics Data System (ADS)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

    Contemporary image acquisition devices introduce optical distortion into the image. It results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze orthogonality with regard to radius for its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of the distortion parameter estimation is evaluated.
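For reference, the Brown model as parameterised in OpenCV's calibration module combines radial terms (k1, k2, k3) and decentering (tangential) terms (p1, p2) acting on normalised image coordinates; a minimal NumPy sketch:

```python
import numpy as np

def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Apply the Brown distortion model to normalised coordinates (x, y).

    Radial terms k1..k3 and decentering terms p1, p2, following the
    parameterisation used by OpenCV's calibration functions.
    """
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the mapping is the identity.
x, y = np.array([0.1, -0.2]), np.array([0.05, 0.3])
print(brown_distort(x, y, 0, 0, 0, 0, 0))
```

Calibration estimates the coefficients by minimising reprojection error over many views of a known pattern; the stability analysed in the paper concerns how repeatable those estimates are.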

  17. Colour based fire detection method with temporal intensity variation filtration

    NASA Astrophysics Data System (ADS)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    The development of video and computing technologies and of computer vision makes automatic fire detection from video possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not enough to detect fire reliably, mainly because the scene may contain many objects with colours similar to fire. The temporal intensity variation of pixels, averaged over a series of several frames, is therefore used to separate such objects from actual fire. The algorithm performs robustly and was implemented as a computer program using the OpenCV library.
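The two-stage idea (colour rule, then temporal filtration) can be sketched in NumPy; the particular colour rule, thresholds, and synthetic frames below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def fire_mask(frame, r_min=180):
    """Colour rule: fire pixels are bright and red-dominant (R > G > B)."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    return (r > r_min) & (r > g) & (g > b)

def temporal_filter(frames, colour_mask, var_min=50.0):
    """Keep only colour-rule pixels whose intensity fluctuates over time;
    fire flickers, while fire-coloured static objects do not."""
    stack = np.stack([f.mean(axis=-1) for f in frames])   # (T, H, W)
    return colour_mask & (stack.var(axis=0) > var_min)

# Synthetic demo: one flickering fire-like pixel, one static one.
T, H, W = 10, 2, 2
frames = np.zeros((T, H, W, 3), dtype=np.uint8)
frames[:, 0, 0] = [220, 120, 30]                  # static fire-coloured object
for t in range(T):                                # flickering fire pixel
    frames[t, 1, 1] = [255, 100, 20] if t % 2 else [200, 100, 20]
mask = temporal_filter(list(frames), fire_mask(frames[-1]))
print(mask)
```

Only the flickering pixel survives both stages; the static fire-coloured object is filtered out by its near-zero temporal variance.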

  18. A Performance Comparison of Color Vision Tests for Military Screening.

    PubMed

    Walsh, David V; Robinson, James; Jurek, Gina M; Capó-Aponte, José E; Riggs, Daniel W; Temme, Leonard A

    2016-04-01

    Current color vision (CV) tests used for aviation screening in the U.S. Army only provide pass-fail results, and previous studies have shown variable sensitivity and specificity. The purpose of this study was to evaluate seven CV tests to determine an optimal CV test screener that potentially could be implemented by the U.S. Army. There were 133 subjects [65 Color Vision Deficits (CVD), 68 Color Vision Normal (CVN)] who performed all of the tests in one setting. CVD and CVN determination was initially assessed with the Oculus anomaloscope. Each test was administered monocularly and according to the test protocol. The main outcome measures were test sensitivity, specificity, and administration time (automated tests). Three of the four Pseudoisochromatic Plate (PIP) tests had a sensitivity/specificity > 0.90 OD/OS, whereas the FALANT tests had a sensitivity/specificity > 0.80 OD/OS. The Cone Contrast Test (CCT) demonstrated sensitivity/specificity > 0.90 OD/OS, whereas the Color Assessment and Diagnosis (CAD) test demonstrated sensitivity/specificity > 0.85 OD/OS. Comparison with the anomaloscope ("gold standard") revealed no significant difference of sensitivity and specificity OD/OS with the CCT, Dvorine PIP, and PIPC tests. Finally, the CCT administration time was significantly faster than the CAD test. The current U.S. Army CV screening tests demonstrated good sensitivity and specificity, as did the automated tests. In addition, some current PIP tests (Dvorine, PIPC), and the CCT performed no worse statistically than the anomaloscope with regard to sensitivity/specificity. The CCT letter presentation is randomized and results would not be confounded by potential memorization, or fading, of book plates.
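The sensitivity and specificity figures above reduce to simple counts against the anomaloscope gold standard; a minimal sketch with hypothetical outcomes for ten subjects:

```python
def sensitivity_specificity(results):
    """results: list of (truth, test) pairs with values 'CVD' or 'CVN',
    truth from the anomaloscope, test from the screener under study."""
    tp = sum(1 for t, p in results if t == 'CVD' and p == 'CVD')
    fn = sum(1 for t, p in results if t == 'CVD' and p == 'CVN')
    tn = sum(1 for t, p in results if t == 'CVN' and p == 'CVN')
    fp = sum(1 for t, p in results if t == 'CVN' and p == 'CVD')
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screener outcomes: 4/5 deficits caught, 4/5 normals passed.
outcomes = ([('CVD', 'CVD')] * 4 + [('CVD', 'CVN')] +
            [('CVN', 'CVN')] * 4 + [('CVN', 'CVD')])
print(sensitivity_specificity(outcomes))
```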

  19. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic-resource occupation that improve on previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard-cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
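As background (not the paper's optimized fixed-point equations), the per-pixel update at the heart of a Stauffer-Grimson-style GMM can be sketched in simplified form; the learning rate, match threshold, and initial variance below are illustrative, and the weight decay and normalisation of the full algorithm are omitted:

```python
def gmm_update(models, x, alpha=0.05, match_sigma=2.5):
    """One simplified GMM update step for a single grayscale pixel.

    models: list of [weight, mean, variance] Gaussians.
    Returns True if x matched an existing Gaussian (background-like),
    False if it started a new one (foreground-like).
    """
    for m in models:
        w, mu, var = m
        if abs(x - mu) <= match_sigma * var ** 0.5:   # within 2.5 sigma
            m[0] = w + alpha * (1 - w)                # grow matched weight
            m[1] = mu + alpha * (x - mu)              # pull mean to sample
            m[2] = var + alpha * ((x - mu) ** 2 - var)
            return True
    # no match: replace the weakest Gaussian with one centred on x
    weakest = min(range(len(models)), key=lambda i: models[i][0])
    models[weakest] = [alpha, float(x), 400.0]        # wide initial variance
    return False

# Two modes: a dominant background around 100, a weaker one around 180.
models = [[0.7, 100.0, 100.0], [0.3, 180.0, 100.0]]
print(gmm_update([m[:] for m in models], 102))  # background-like sample
print(gmm_update([m[:] for m in models], 20))   # foreground-like sample
```

The hardware cores in the paper pipeline exactly this kind of per-pixel arithmetic, which is why truncated multipliers and compressed lookup tables pay off.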

  20. Defect Detection in Superconducting Radiofrequency Cavity Surface Using C++ and OpenCV

    NASA Astrophysics Data System (ADS)

    Oswald, Samantha; Thomas Jefferson National Accelerator Facility Collaboration

    2014-03-01

    Thomas Jefferson National Accelerator Facility (TJNAF) uses superconducting radiofrequency (SRF) cavities to accelerate an electron beam. If these cavities have a small particle or defect, it can degrade the performance of the cavity. The problem at hand is inspecting the cavity for defects, small bubbles of niobium on the surface of the cavity. Thousands of pictures have to be taken of a single cavity and then examined to see how many defects are present. A C++ program using the Open Source Computer Vision (OpenCV) library was constructed to reduce the number of hours spent searching through the images and to find all the defects. The SRF group is now able to use this code to identify defects in ongoing tests of SRF cavities. Real-time detection is the next step, so that the camera will detect defects directly while viewing the cavity instead of relying on stored pictures.
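The abstract does not describe the detection method; as an illustrative sketch only (not the TJNAF code), thresholding followed by connected-component grouping is one standard way to isolate small bright defects in a grayscale image:

```python
from collections import deque

def find_defects(image, threshold, min_size=2):
    """Threshold a grayscale image (list of rows) and return connected
    bright regions (candidate defects) as lists of (row, col) pixels,
    discarding single-pixel noise below min_size."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for r in range(h):
        for c in range(w):
            if image[r][c] >= threshold and not seen[r][c]:
                blob, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:                       # BFS over 4-neighbours
                    y, x = q.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and image[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(blob) >= min_size:
                    blobs.append(blob)
    return blobs

# Toy cavity image: one 3-pixel defect and one single-pixel noise speck.
img = [[0] * 8 for _ in range(6)]
img[1][1] = img[1][2] = img[2][1] = 255
img[4][6] = 255
print(len(find_defects(img, 128)))
```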

  1. Automated and connected vehicle (AV/CV) test bed to improve transit, bicycle, and pedestrian safety : concept of operations plan.

    DOT National Transportation Integrated Search

    2017-02-01

    This document presents the Concept of Operations (ConOps) Plan for the Automated and Connected Vehicle (AV/CV) Test Bed to Improve Transit, Bicycle, and Pedestrian Safety. As illustrated in Figure 1, the plan presents the overarching vision and goals...

  2. Target tracking and surveillance by fusing stereo and RFID information

    NASA Astrophysics Data System (ADS)

    Raza, Rana H.; Stockman, George C.

    2012-06-01

    Ensuring security in high risk areas such as an airport is an important but complex problem. Effectively tracking personnel, containers, and machines is a crucial task. Moreover, security and safety require understanding the interaction of persons and objects. Computer vision (CV) has been a classic tool; however, variable lighting, imaging, and random occlusions present difficulties for real-time surveillance, resulting in erroneous object detection and trajectories. Determining object ID via CV at any instance of time in a crowded area is computationally prohibitive, yet the trajectories of personnel and objects should be known in real time. Radio Frequency Identification (RFID) can be used to reliably identify target objects and can even locate targets at coarse spatial resolution, while CV provides fuzzy features for target ID at finer resolution. Our research demonstrates benefits obtained when most objects are "cooperative" by being RFID tagged. Fusion provides a method to simplify the correspondence problem in 3D space. A surveillance system can query for unique object ID as well as tag ID information, such as target height, texture, shape and color, which can greatly enhance scene analysis. We extend geometry-based tracking so that intermittent information on ID and location can be used in determining a set of trajectories of N targets over T time steps. We show that partial target information obtained through RFID can reduce computation time (by 99.9% in some cases) and also increase the likelihood of producing correct trajectories. We conclude that real-time decision-making should be possible if the surveillance system can integrate information effectively between the sensor level and activity understanding level.

  3. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms

    PubMed Central

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications, such as those in medical care and industrial processes. In medical care, this technique can display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one way of achieving real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner. This has been achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831

  5. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  6. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    NASA Astrophysics Data System (ADS)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper focuses on the comparison of some stereo matching algorithms (local and global) that are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) method, which performs pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware codes such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
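The consistency constraint at the core of SGM is a simple recurrence applied along several scan directions. A single-direction NumPy sketch for one scanline follows; the penalties P1 and P2 are illustrative values:

```python
import numpy as np

def sgm_aggregate_left_to_right(cost, P1=1.0, P2=8.0):
    """SGM cost aggregation along one direction for a single scanline.

    cost: (W, D) matching costs. Implements the SGM recurrence
    L(p,d) = C(p,d) + min(L(p-1,d), L(p-1,d-1)+P1, L(p-1,d+1)+P1,
                          min_d' L(p-1,d') + P2) - min_d' L(p-1,d').
    """
    W, D = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        m = prev.min()
        shifted_minus = np.concatenate(([np.inf], prev[:-1])) + P1
        shifted_plus = np.concatenate((prev[1:], [np.inf])) + P1
        L[x] = cost[x] + np.minimum.reduce(
            [prev, shifted_minus, shifted_plus, np.full(D, m + P2)]) - m
    return L

# Toy scanline: disparity 2 is cheapest everywhere; aggregation keeps it.
cost = np.full((5, 4), 5.0)
cost[:, 2] = 1.0
disp = sgm_aggregate_left_to_right(cost).argmin(axis=1)
print(disp)
```

The full algorithm runs this recurrence along 8 (or 16) directions and sums the aggregated volumes before the final winner-take-all disparity selection.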

  7. In Vivo Fluorescence Imaging and Tracking of Circulating Cells and Therapeutic Nanoparticles

    NASA Astrophysics Data System (ADS)

    Markovic, Stacey

    Noninvasive enumeration of rare circulating cells in small animals is of great importance in many areas of biomedical research, but most existing enumeration techniques involve drawing and enriching blood, which is known to be problematic. Recently, small animal "in vivo flow cytometry" (IVFC) techniques have been developed, in which cells flowing through small arterioles are counted continuously and noninvasively in vivo. However, higher-sensitivity IVFC techniques are needed for studying low-abundance (<100/mL) circulating cells. To this end, we developed a macroscopic fluorescence imaging system and automated computer vision algorithm that allow in vivo detection, enumeration and tracking of circulating fluorescently labeled cells from multiple large blood vessels in the ear of a mouse. This technique --- "computer vision IVFC" (CV-IVFC) --- allows cell detection and enumeration at concentrations of 20 cells/mL. The performance of CV-IVFC was also characterized for low-contrast imaging scenarios, representing conditions of weak cell fluorescent labeling or high background tissue autofluorescence, and showed efficient tracking and enumeration of circulating cells with 50% sensitivity in contrast conditions degraded two orders of magnitude relative to the in vivo testing, supporting the potential utility of CV-IVFC in a range of biological models. We also refined prior work in our lab on a separate rare-cell detection platform --- "diffuse fluorescence flow cytometry" (DFFC) --- by implementing a "frequency encoding" scheme that modulates two excitation lasers. Fluorescent light from both lasers can be detected simultaneously and separated by frequency, allowing better noise discrimination, sensitivity, and cell localization. The system design is described in detail and preliminary data are shown. Last, we developed a broad-field transmission fluorescence imaging system to observe nanoparticle (NP) diffusion in bulk biological tissue. Novel, implantable NP spacers allow controlled, long-term release of drugs; however, the kinetics of NP (drug) diffusion over time are still poorly understood. Our imaging system allowed us to quantify the diffusion of free dye and of NPs of different sizes in vitro and in vivo. Subsequent analysis verified that there was continuous diffusion that could be controlled based on particle size. Continued use of this imaging system will aid optimization of NP spacers.

  8. Colour vision requirements in visually demanding occupations.

    PubMed

    Barbur, J L; Rodriguez-Carmona, M

    2017-06-01

    Normal trichromatic colour vision (CV) is often required as a condition for employment in visually demanding occupations. If this requirement could be enforced using current colour assessment tests, a significant percentage of subjects with anomalous, congenital trichromacy who can perform the suprathreshold, colour-related tasks encountered in many occupations with the same accuracy as normal trichromats would fail. These applicants would therefore be discriminated against unfairly. One solution to this problem is to produce minimum, justifiable CV requirements that are specific to each occupation. This has been done successfully for commercial aviation (i.e. the flight crew) and for Transport for London train drivers. An alternative approach is to make use of new findings and the statistical outcomes of past practices to produce graded, justifiable CV categories that can be enforced. To achieve this aim, we analysed colour assessment outcomes and quantified severity of CV loss in 1363 subjects. The severity of CV loss was measured in each subject and statistical pass/fail outcomes established for each of the most commonly used, conventional colour assessment tests and protocols. This evidence and new findings that relate severity of loss to the effective use of colour signals in a number of tasks provide the basis for a new colour grading system based on six categories. A single colour assessment test is needed to establish the applicant's CV category, which can range from 'supernormal', for the most stringent, colour-demanding tasks, to 'severe colour deficiency', when red/green CV is either absent or extremely weak. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Color vision and neuroretinal function in diabetes.

    PubMed

    Wolff, B E; Bearse, M A; Schneck, M E; Dhamdhere, K; Harrison, W W; Barez, S; Adams, A J

    2015-04-01

    We investigate how type 2 diabetes (T2DM) and diabetic retinopathy (DR) affect color vision (CV) and mfERG implicit time (IT), whether CV and IT are correlated, and whether CV and IT abnormality classifications agree. Adams desaturated D-15 color test, mfERG, and fundus photographs were examined in 37 controls, 22 T2DM patients without DR (NoRet group), and 25 T2DM patients with DR (Ret group). Color confusion score (CCS) was calculated. ITs were averaged within the central 7 hexagons (central IT; ≤4.5°) and outside this area (peripheral IT; ≥4.5°). DR was within (DRIN) or outside (DROUT) of the central 7 hexagons. Group differences, percentages of abnormalities, correlations, and agreement were determined. CCS was greater in the NoRet (P = 0.002) and Ret (P < 0.0001) groups than in control group. CCS was abnormal in 3, 41, and 48 % of eyes in the control, NoRet, and Ret groups, respectively. Ret group CV abnormalities were more frequent in DRIN than in DROUT subgroups (71 vs. 18 %, respectively; P < 0.0001). CCS and IT were correlated only in the Ret group, in both retinal zones (P ≤ 0.028). Only in the Ret group did CCS and peripheral IT abnormality classifications agree (72 %; P < 0.05). CV is affected in patients with T2DM, even without DR. Central DR increases the likelihood of a CV deficit compared with non-central DR. mfERG IT averaged across central or peripheral retinal locations is less frequently abnormal than CV in the absence of DR, and these two measures are correlated only when DR is present.

  11. 3D documentation of the Petalaindera: digital heritage preservation methods using 3D laser scanner and photogrammetry

    NASA Astrophysics Data System (ADS)

    Sharif, Harlina Md; Hazumi, Hazman; Hafizuddin Meli, Rafiq

    2018-01-01

    3D imaging technologies have undergone a massive revolution in recent years. Despite this rapid development, documentation of 3D cultural assets in Malaysia is still very much reliant upon conventional techniques such as measured drawings and manual photogrammetry. There has been very little progress towards exploring new methods or advanced technologies to convert 3D cultural assets into 3D visual representations and visualization models that are easily accessible for information sharing. In recent years, however, the advent of computer vision (CV) algorithms has made it possible to reconstruct the 3D geometry of objects from image sequences taken with digital cameras, which are then processed by web services and freeware applications. This paper presents a completed stage of an exploratory study that investigates the potential of using CV automated image-based open-source software and web services to reconstruct and replicate cultural assets. By selecting an intricate wooden boat, the Petalaindera, this study evaluates the efficiency of CV systems and compares it with the application of 3D laser scanning, which is known for its accuracy and efficiency but also its high cost. The aim is to compare the visual accuracy of 3D models generated by the CV system with that of 3D models produced by 3D scanning and manual photogrammetry for an intricate subject such as the Petalaindera. The ultimate objective is to explore cost-effective methods that could provide fundamental guidelines on best-practice approaches for digital heritage in Malaysia.

  12. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
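FLANN's search structures are considerably more sophisticated, but the best-bin-first idea behind its approximate k-d tree search can be sketched in pure Python: descend greedily, keep unexplored branches in a priority queue ordered by their distance bound, and stop after a fixed node budget (analogous to FLANN's "checks" parameter). The data and budget below are illustrative:

```python
import heapq
import random

def build_kdtree(points, depth=0):
    """Minimal k-d tree; each node is (point, axis, left, right)."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def ann_search(tree, q, max_checks=32):
    """Best-bin-first approximate nearest neighbour search.

    Stops after visiting max_checks nodes; with max_checks >= the number
    of points, every node is visited and the result is exact."""
    best_d, best_p = float('inf'), None
    heap = [(0.0, 0, tree)]        # (distance bound, tie-breaker, subtree)
    seq, checks = 1, 0
    while heap and checks < max_checks:
        _, _, node = heapq.heappop(heap)
        while node is not None and checks < max_checks:
            point, axis, left, right = node
            d = sum((a - b) ** 2 for a, b in zip(point, q))
            if d < best_d:
                best_d, best_p = d, point
            checks += 1
            diff = q[axis] - point[axis]
            near, far = (left, right) if diff < 0 else (right, left)
            if far is not None:    # queue the far branch for later
                heapq.heappush(heap, (diff * diff, seq, far))
                seq += 1
            node = near            # descend the near branch greedily
    return best_p

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
tree = build_kdtree(pts)
query = (0.5, 0.5)
print(ann_search(tree, query, max_checks=16))   # approximate answer
print(ann_search(tree, query, max_checks=200))  # exhaustive, hence exact
```

FLANN additionally randomizes the split dimensions across multiple trees and searches them through a shared priority queue, which is what makes the approximation accurate at small check budgets.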

  13. A Versatile Phenotyping System and Analytics Platform Reveals Diverse Temporal Responses to Water Availability in Setaria.

    PubMed

    Fahlgren, Noah; Feldman, Maximilian; Gehan, Malia A; Wilson, Melinda S; Shyu, Christine; Bryant, Douglas W; Hill, Steven T; McEntee, Colton J; Warnasooriya, Sankalpi N; Kumar, Indrajit; Ficor, Tracy; Turnipseed, Stephanie; Gilbert, Kerrigan B; Brutnell, Thomas P; Carrington, James C; Mockler, Todd C; Baxter, Ivan

    2015-10-05

    Phenotyping has become the rate-limiting step in using large-scale genomic data to understand and improve agricultural crops. Here, the Bellwether Phenotyping Platform for controlled-environment plant growth and automated multimodal phenotyping is described. The system has capacity for 1140 plants, which pass daily through stations to record fluorescence, near-infrared, and visible images. Plant Computer Vision (PlantCV) was developed as open-source, hardware platform-independent software for quantitative image analysis. In a 4-week experiment, wild Setaria viridis and domesticated Setaria italica had fundamentally different temporal responses to water availability. While both lines produced similar levels of biomass under limited water conditions, Setaria viridis maintained the same water-use efficiency under water replete conditions, while Setaria italica shifted to less efficient growth. Overall, the Bellwether Phenotyping Platform and PlantCV software detected significant effects of genotype and environment on height, biomass, water-use efficiency, color, plant architecture, and tissue water status traits. All ∼ 79,000 images acquired during the course of the experiment are publicly available. Copyright © 2015 The Author. Published by Elsevier Inc. All rights reserved.
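
    The image-analysis core of such a pipeline, binary thresholding followed by trait measurement, can be sketched in plain Python. The threshold and toy image below are hypothetical; PlantCV itself performs far richer, calibrated multimodal analysis.

```python
def plant_metrics(img, thresh=128):
    """Binary-threshold a grayscale image (list of rows) and return
    simple shoot traits: area (pixel count) and height (row span)."""
    mask = [[1 if v > thresh else 0 for v in row] for row in img]
    area = sum(map(sum, mask))
    rows_with_plant = [r for r, row in enumerate(mask) if any(row)]
    height = rows_with_plant[-1] - rows_with_plant[0] + 1 if rows_with_plant else 0
    return {"area": area, "height": height}

# A toy 6x6 "visible image": bright plant pixels on a dark background.
img = [
    [10,  10, 200,  10, 10, 10],
    [10, 180, 200, 190, 10, 10],
    [10,  10, 210,  10, 10, 10],
    [10,  10, 205,  10, 10, 10],
    [10,  10,  10,  10, 10, 10],
    [10,  10,  10,  10, 10, 10],
]
print(plant_metrics(img))   # {'area': 6, 'height': 4}
```

    Tracking such per-image traits across daily imaging stations yields the temporal growth curves analysed in the study.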

  14. A computer vision framework for finger-tapping evaluation in Parkinson's disease.

    PubMed

    Khan, Taha; Nyholm, Dag; Westin, Jerker; Dougherty, Mark

    2014-01-01

    The rapid finger-tapping test (RFT) is an important method for clinical evaluation of movement disorders, including Parkinson's disease (PD). In clinical practice, the naked-eye evaluation of RFT results in a coarse judgment of symptom scores. We introduce a novel computer-vision (CV) method for quantification of tapping symptoms through motion analysis of index-fingers. The method is unique as it utilizes facial features to calibrate tapping amplitude for normalization of distance variation between the camera and subject. The study involved 387 video recordings of the RFT from 13 patients diagnosed with advanced PD. Tapping performance in these videos was rated by two clinicians between the symptom severity levels ('0: normal' to '3: severe') using the unified Parkinson's disease rating scale motor examination of finger-tapping (UPDRS-FT). Another set of recordings in this study consisted of 84 videos of the RFT from 6 healthy controls. These videos were processed by a CV algorithm that tracks the index-finger motion between the video-frames to produce a tapping time-series. Different features were computed from this time series to estimate speed, amplitude, rhythm and fatigue in tapping. The features were used to train a support vector machine (1) to categorize the patient group between UPDRS-FT symptom severity levels, and (2) to discriminate between PD patients and healthy controls. A new representative feature of tapping rhythm, 'cross-correlation between the normalized peaks', showed a strong Guttman correlation (μ2 = -0.80) with the clinical ratings. The classification of tapping features using the support vector machine classifier and 10-fold cross validation categorized the patient samples between UPDRS-FT levels with an accuracy of 88%. The same classification scheme discriminated between RFT samples of healthy controls and PD patients with an accuracy of 95%. 
The work supports the feasibility of the approach, which is presumed suitable for PD monitoring in the home environment. The system offers advantages over other technologies (e.g. magnetic sensors, accelerometers, etc.) previously developed for objective assessment of tapping symptoms. Copyright © 2013 Elsevier B.V. All rights reserved.
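
    The kinds of features described (speed, amplitude, rhythm) can be sketched from a tapping displacement time-series in plain Python. This is a simplified illustration of peak-based feature extraction, not the authors' exact feature set; the peak criterion and rhythm measure below are assumptions.

```python
def tapping_features(series, fps=30):
    """Extract simple speed/amplitude/rhythm features from a
    finger-tapping displacement time-series (one value per frame)."""
    # Local maxima above the mean count as taps.
    mean = sum(series) / len(series)
    peaks = [i for i in range(1, len(series) - 1)
             if series[i] > series[i - 1] and series[i] > series[i + 1]
             and series[i] > mean]
    intervals = [(b - a) / fps for a, b in zip(peaks, peaks[1:])]
    speed = len(peaks) / (len(series) / fps)   # taps per second
    amplitude = sum(series[i] for i in peaks) / len(peaks) if peaks else 0.0
    # Rhythm: coefficient of variation of inter-tap intervals
    # (lower means a steadier rhythm).
    if intervals:
        mu = sum(intervals) / len(intervals)
        var = sum((x - mu) ** 2 for x in intervals) / len(intervals)
        rhythm_cv = (var ** 0.5) / mu
    else:
        rhythm_cv = 0.0
    return speed, amplitude, rhythm_cv

# A perfectly periodic toy tapping trace: three equal taps.
series = [0, 5, 0, 0, 5, 0, 0, 5, 0]
print(tapping_features(series))
```

    A PD patient with irregular tapping would show a higher interval coefficient of variation than the zero value of this perfectly periodic trace.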

  15. Visually induced self-motion sensation adapts rapidly to left-right reversal of vision

    NASA Technical Reports Server (NTRS)

    Oman, C. M.; Bock, O. L.

    1981-01-01

    Three experiments were conducted using 15 adult volunteers with no overt oculomotor or vestibular disorders. In all experiments, left-right vision reversal was achieved using prism goggles, which permitted a binocular field of vision subtending approximately 45 deg horizontally and 28 deg vertically. In all experiments, circularvection (CV) was tested before and immediately after a period of exposure to reversed vision. After one to three hours of active movement while wearing vision-reversing goggles, 10 of 15 (stationary) human subjects viewing a moving stripe display experienced a self-rotation illusion in the same direction as seen stripe motion, rather than in the opposite (normal) direction, demonstrating that the central neural pathways that process visual self-rotation cues can undergo rapid adaptive modification.

  16. Technical Note: A respiratory monitoring and processing system based on computer vision: prototype and proof of principle.

    PubMed

    Leduc, Nicolas; Atallah, Vincent; Escarmant, Patrick; Vinh-Hung, Vincent

    2016-09-08

    Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in-house respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real-time submillimetric image resolution at 4 m. Using the OpenCV libraries, we optically tracked high-contrast markers set on patients' breasts. Validation of spatial and temporal accuracy was performed on a mechanical phantom and on a human breast. Madibreast was able to track motion of markers at speeds up to 5 cm/s, at a frame rate of 30 fps, with submillimetric accuracy on the mechanical phantom and human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion of up to 4 mm for deep-breathing patterns. This low-cost, computer-vision system for real-time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion. © 2016 The Authors.
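
    The core of optical marker tracking, locating a high-contrast blob in each frame and stringing the positions into a motion trace, can be sketched in plain Python. This is a generic centroid tracker under an assumed brightness threshold, not Madibreast's implementation.

```python
def marker_centroid(frame, thresh=200):
    """Centroid (row, col) of pixels brighter than `thresh` in one frame."""
    pts = [(r, c) for r, row in enumerate(frame)
           for c, v in enumerate(row) if v >= thresh]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def motion_trace(frames, thresh=200):
    """Track a high-contrast marker across frames: one centroid per frame."""
    return [marker_centroid(f, thresh) for f in frames]

# Two 4x4 frames: the bright marker moves one column to the right.
f0 = [[0] * 4 for _ in range(4)]; f0[1][1] = 255
f1 = [[0] * 4 for _ in range(4)]; f1[1][2] = 255
print(motion_trace([f0, f1]))   # [(1.0, 1.0), (1.0, 2.0)]
```

    Sampling such a trace at the camera frame rate yields the respiratory signal; the sub-pixel averaging in the centroid is what makes submillimetric accuracy plausible at sufficient optical magnification.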

  17. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented, measuring the speedup obtained from the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
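
    The benchmark kernel, image correlation for template matching, can be sketched as naive spatial-domain normalized cross-correlation in plain Python. The DSP implementation described above would instead use DFT-based correlation; this sketch only shows what is being computed.

```python
def ncc(patch, tmpl):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    n = len(tmpl) * len(tmpl[0])
    pm = sum(map(sum, patch)) / n
    tm = sum(map(sum, tmpl)) / n
    num = sp = st = 0.0
    for pr, tr in zip(patch, tmpl):
        for p, t in zip(pr, tr):
            num += (p - pm) * (t - tm)
            sp += (p - pm) ** 2
            st += (t - tm) ** 2
    return num / ((sp * st) ** 0.5) if sp and st else 0.0

def match_template(img, tmpl):
    """Slide the template over the image; return (row, col) of best score."""
    th, tw = len(tmpl), len(tmpl[0])
    best, loc = -2.0, (0, 0)
    for r in range(len(img) - th + 1):
        for c in range(len(img[0]) - tw + 1):
            patch = [row[c:c + tw] for row in img[r:r + th]]
            s = ncc(patch, tmpl)
            if s > best:
                best, loc = s, (r, c)
    return loc

img = [[0, 0, 0, 0, 0],
       [0, 9, 1, 0, 0],
       [0, 1, 9, 0, 0],
       [0, 0, 0, 0, 0]]
tmpl = [[9, 1],
        [1, 9]]
print(match_template(img, tmpl))   # (1, 1)
```

    The O(W·H·w·h) cost of this direct form is exactly why offloading to a DFT-capable DSP pays off for large images.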

  18. A computer vision based candidate for functional balance test.

    PubMed

    Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath

    2015-08-01

    Balance in humans is a motor skill based on complex multimodal sensing, processing and control. The ability to maintain balance in activities of daily living (ADL) is compromised by aging, disease, injury and environmental factors. The Centers for Disease Control and Prevention (CDC) estimated the cost of falls among older adults at $34 billion in 2013, a figure expected to reach $54.9 billion by 2020. In this paper, we present a brief review of balance impairments followed by subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate for a functional balance test. The test will take less than a minute to administer and is expected to be objective, repeatable and highly discriminative in quantifying the ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers, and compare performance with a balance assessment system called the BTrackS Balance Assessment Board. Our results show a high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance tests and warrants further investigation to assess validity in clinical settings, including acute care, long term care and assisted living care facilities. Our long term goals include non-intrusive approaches to assess balance competence during ADL in independent living environments.
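
    The paper does not publish its balance metrics, but typical CV-based posturography summarizes a tracked body-marker trajectory with sway statistics. A generic sketch, with illustrative metrics only:

```python
def sway_metrics(trace):
    """Postural-sway summary from a sequence of (x, y) marker positions:
    total path length and RMS distance from the mean position."""
    mx = sum(p[0] for p in trace) / len(trace)
    my = sum(p[1] for p in trace) / len(trace)
    path = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(trace, trace[1:]))
    rms = (sum((x - mx) ** 2 + (y - my) ** 2
               for x, y in trace) / len(trace)) ** 0.5
    return path, rms

steady = [(0, 0), (0.1, 0), (0, 0.1), (0, 0)]       # small sway
swaying = [(0, 0), (1, 0), (0, 1), (0, 0)]          # large sway
print(sway_metrics(steady)[0] < sway_metrics(swaying)[0])   # True
```

    Larger path length and RMS sway indicate poorer balance, which is the quantity a force-plate system like BTrackS measures via center-of-pressure rather than via a camera.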

  19. Utilizing Commercial Hardware and Open Source Computer Vision Software to Perform Motion Capture for Reduced Gravity Flight

    NASA Technical Reports Server (NTRS)

    Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth

    2016-01-01

    Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and researchers funded by the National Space Biomedical Research Institute (NSBRI) by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed utilizing open source computer vision code with commercial off the shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means to produce quantitative comparative motion capture kinematic data. Additionally, data such as the required exercise volume for small spaces such as the Orion capsule can be determined. 
METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera 3D reconstruction. Utilizing OpenCV via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces using commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove geometric distortion of the lens and sensor (specific to each individual camera). A set of high-contrast markers was placed on the exercising subject (safety also required that they be soft in case they became detached during parabolic flight); small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration, sweeping a wand through the camera scenes simultaneously, was also performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in side-by-side comparisons with a traditional motion capture system, and also on a parabolic flight.
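
    Once each camera is calibrated, reconstructing a marker's 3D position reduces to intersecting the back-projected rays from two or more cameras. A minimal midpoint-triangulation sketch in plain Python (illustrative geometry only; OpenCV provides equivalent routines for calibrated rigs):

```python
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return [x * s for x in a]

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two camera rays.
    Each ray is a camera center `o` plus a direction `d` through
    the observed marker pixel (back-projected)."""
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, s))       # closest point on ray 1
    p2 = add(o2, scale(d2, t))       # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras at (-1,0,0) and (1,0,0), both seeing a marker at (0,0,4).
print(triangulate_midpoint([-1, 0, 0], [1, 0, 4], [1, 0, 0], [-1, 0, 4]))
# [0.0, 0.0, 4.0]
```

    With noisy centroids the two rays no longer intersect, and the midpoint (or a least-squares variant over all four cameras) gives the reconstructed 3D marker position.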

  20. A cognitive approach to vision for a mobile robot

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Funk, Christopher; Lyons, Damian

    2013-05-01

    We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. 
We describe experiments using both static and moving objects.

  1. The use of peripheral vision to guide perturbation-evoked reach-to-grasp balance-recovery reactions

    PubMed Central

    King, Emily C.; McKay, Sandra M.; Cheng, Kenneth C.

    2016-01-01

    For a reach-to-grasp reaction to prevent a fall, it must be executed very rapidly, but with sufficient accuracy to achieve a functional grip. Recent findings suggest that the CNS may avoid potential time delays associated with saccade-guided arm movements by instead relying on peripheral vision (PV). However, studies of volitional arm movements have shown that reaching is slower and/or less accurate when guided by PV, rather than central vision (CV). The present study investigated how the CNS resolves speed-accuracy trade-offs when forced to use PV to guide perturbation-evoked reach-to-grasp balance-recovery reactions. These reactions were evoked, in 12 healthy young adults, via sudden unpredictable anteroposterior platform translation (barriers deterred stepping reactions). In PV trials, subjects were required to look straight-ahead at a visual target while a small cylindrical handhold (length 25% greater than hand-width) moved intermittently and unpredictably along a transverse axis before stopping at a visual angle of 20°, 30°, or 40°. The perturbation was then delivered after a random delay. In CV trials, subjects fixated on the handhold throughout the trial. A concurrent visuo-cognitive task was performed in 50% of PV trials but had little impact on reach-to-grasp timing or accuracy. Forced reliance on PV did not significantly affect response initiation times, but did lead to longer movement times, longer time-after-peak-velocity and less direct trajectories (compared to CV trials) at the larger visual angles. Despite these effects, forced reliance on PV did not compromise the ability to achieve a functional grasp and recover equilibrium, for the moderately large perturbations and healthy young adults tested in this initial study. PMID:20957351

  2. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path or traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES requires less running time.
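
    The quantity being optimized, a CV error surface over two cost parameters, can be illustrated with a brute-force grid sketch in plain Python. The tiny cost-weighted threshold classifier below is a stand-in assumption, not CS-SVM, and grid evaluation is exactly the baseline the paper's solution-surface method improves upon.

```python
from itertools import product

def weighted_threshold(data, c_pos, c_neg):
    """Fit a 1-D stump minimizing cost-weighted training error:
    predict positive when x >= threshold."""
    xs = sorted({x for x, _ in data})
    cands = xs + [xs[-1] + 1]
    def cost(th):
        return sum(c_pos if (y == 1 and x < th) else
                   c_neg if (y == 0 and x >= th) else 0
                   for x, y in data)
    return min(cands, key=cost)

def cv_error_surface(data, grid, k=2):
    """K-fold cross-validation error for every (c_pos, c_neg) pair:
    a brute-force grid stand-in for the paper's solution surface."""
    folds = [data[i::k] for i in range(k)]
    surface = {}
    for c_pos, c_neg in product(grid, repeat=2):
        errs = 0
        for i in range(k):
            test = folds[i]
            train = [d for j, f in enumerate(folds) if j != i for d in f]
            th = weighted_threshold(train, c_pos, c_neg)
            errs += sum((x >= th) != (y == 1) for x, y in test)
        surface[(c_pos, c_neg)] = errs / len(data)
    return surface

data = [(0, 0), (3, 1), (1, 0), (4, 1), (0, 0), (3, 1)]
surf = cv_error_surface(data, grid=[0.5, 1.0, 2.0])
print(min(surf.values()))   # 0.0 for this separable toy set
```

    The global minimum over this surface is what CV-SES locates exactly, for all parameter values rather than only at grid points.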

  3. Understanding and preventing computer vision syndrome.

    PubMed

    Loh, Ky; Redd, Sc

    2008-01-01

    The invention of the computer and advances in information technology have revolutionized and benefited society, but at the same time they cause symptoms related to computer usage such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms lead to computer vision syndrome: the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. Display characteristics such as brightness, resolution, glare and quality are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  4. Investigation into the use of smartphone as a machine vision device for engineering metrology and flaw detection, with focus on drilling

    NASA Astrophysics Data System (ADS)

    Razdan, Vikram; Bateman, Richard

    2015-05-01

    This study investigates the use of a Smartphone and its camera vision capabilities in Engineering metrology and flaw detection, with a view to developing a low cost alternative to Machine vision systems, which are out of range for small scale manufacturers. A Smartphone has to provide a similar level of accuracy as Machine Vision devices like Smart cameras. The objective set out was to develop an App on an Android Smartphone, incorporating advanced computer vision algorithms written in Java. The App could then be used for recording measurements of twist drill bits and hole geometry, and analysing the results for accuracy. A detailed literature review was carried out for an in-depth study of Machine vision systems and their capabilities, including a comparison between the HTC One X Android Smartphone and the Teledyne Dalsa BOA Smart camera. A review of the existing metrology Apps in the market was also undertaken. In addition, the drilling operation was evaluated to establish key measurement parameters of a twist drill bit, especially flank wear and diameter. The methodology covers software development of the Android App, including the use of image processing algorithms like Gaussian blur, Sobel and Canny available from the OpenCV software library, as well as designing and developing the experimental set-up for carrying out the measurements. The results obtained from the experimental set-up were analysed for the geometry of twist drill bits and holes, including diametrical measurements and flaw detection. The results show that Smartphones like the HTC One X have the processing power and the camera capability to carry out metrological tasks, although the dimensional accuracy achievable from the Smartphone App is below the level provided by Machine vision devices like Smart cameras. 
A Smartphone with mechanical attachments, capable of image processing and having a reasonable level of accuracy in dimensional measurement, has the potential to become a handy low-cost Machine vision system for small scale manufacturers, especially in field metrology and flaw detection.
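
    The edge operators named above locate the silhouette from which diameters are measured. A minimal Sobel gradient-magnitude sketch in plain Python (toy image; OpenCV's `Sobel`/`Canny` add smoothing, thresholding and hysteresis on top of this):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude with the 3x3 Sobel kernels."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = sum(kx[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(ky[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            out[r][c] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical edge: dark left half, bright right half.
img = [[0, 0, 9, 9]] * 4
mag = sobel_magnitude(img)
print(mag[1][1], mag[1][2])   # 36.0 36.0
```

    Sub-pixel interpolation across such edge responses is how a camera-based system approaches the accuracy needed for diameter measurement.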

  5. Determination of feature generation methods for PTZ camera object tracking

    NASA Astrophysics Data System (ADS)

    Doyle, Daniel D.; Black, Jonathan T.

    2012-06-01

    Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex with an ever increasing need of fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
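
    The background-subtraction family of detectors can be sketched with a running-average model in plain Python. This is a deliberately simplified stand-in for Mixture of Gaussians, which instead maintains several Gaussian modes per pixel; the learning rate and threshold below are illustrative.

```python
def background_subtract(frames, alpha=0.5, thresh=10):
    """Running-average background model: flag pixels that differ
    from the model, then blend each new frame into the model."""
    bg = [row[:] for row in frames[0]]          # init model from frame 0
    masks = []
    for frame in frames[1:]:
        mask = [[1 if abs(v - b) > thresh else 0
                 for v, b in zip(frow, brow)]
                for frow, brow in zip(frame, bg)]
        masks.append(mask)
        bg = [[(1 - alpha) * b + alpha * v      # update the model
               for v, b in zip(frow, brow)]
              for frow, brow in zip(frame, bg)]
    return masks

still = [[50, 50], [50, 50]]
moved = [[50, 50], [50, 200]]                   # object enters bottom-right
masks = background_subtract([still, still, moved])
print(masks[-1])   # [[0, 0], [0, 1]]
```

    A single running average fails under the camera motion of a PTZ platform, which is precisely why the paper compares MoG against feature-based (SIFT/SURF) and optical-flow methods.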

  6. Vector-Based Ground Surface and Object Representation Using Cameras

    DTIC Science & Technology

    2009-12-01

    representations and it is a digital data structure used for the representation of a ground surface in geographical information systems (GIS). Figure...Vision API library, and the OpenCV library. Also, the POSIX thread library was utilized to quickly capture the source images from cameras. Both

  7. vitisFlower®: Development and Testing of a Novel Android-Smartphone Application for Assessing the Number of Grapevine Flowers per Inflorescence Using Artificial Vision Techniques.

    PubMed

    Aquino, Arturo; Millan, Borja; Gaston, Daniel; Diago, María-Paz; Tardaguila, Javier

    2015-08-28

    Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting, non-invasively and directly in the vineyard, the flower number in grapevine inflorescence photos by implementing artificial vision techniques. The application, called vitisFlower®, firstly guides the user to appropriately take an inflorescence photo using the smartphone's camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower® has been developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of flowers in the captures were found, with a precision exceeding 94%. Additionally, the application's efficiency on four different devices covering a wide range of the market's spectrum was also studied. The results of this benchmarking study showed significant differences among devices, although indicating that the application is efficiently usable even with low-range devices. vitisFlower® is one of the first applications for viticulture that is currently freely available on Google Play.
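
    Counting detected flowers ultimately reduces to counting connected components in a binary detection mask. A minimal flood-fill sketch in plain Python (illustrative only; the app's detection stage is more sophisticated than simple thresholding):

```python
def count_blobs(mask):
    """Count connected components of 1-pixels (4-connectivity)
    in a binary mask: one blob per detected flower."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]          # flood-fill this blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return count

mask = [[1, 1, 0, 0, 1],
        [0, 1, 0, 0, 0],
        [0, 0, 0, 1, 1],
        [1, 0, 0, 1, 0]]
print(count_blobs(mask))   # 4
```

    Touching flowers merge into one blob under this scheme, which is one reason recall (84%) trails precision (94%) in dense inflorescences.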

  8. Drowsy driver mobile application: Development of a novel scleral-area detection method.

    PubMed

    Mohammad, Faisal; Mahadas, Kausalendra; Hung, George K

    2017-10-01

    A reliable and practical app for mobile devices was developed to detect driver drowsiness. It consisted of two main components: a Haar cascade classifier, provided by a computer vision framework called OpenCV, for face/eye detection; and a dedicated JAVA software code for image processing that was applied over a masked region circumscribing the eye. A binary threshold was performed over the masked region to provide a quantitative measure of the number of white pixels in the sclera, which represented the state of eye opening. A continuously low white-pixel count would indicate drowsiness, thereby triggering an alarm to alert the driver. This system was successfully implemented on: (1) a static face image, (2) two subjects under laboratory conditions, and (3) a subject in a vehicle environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
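
    The scleral-area idea, a binary threshold over the masked eye region followed by a white-pixel count, can be sketched in plain Python. The threshold, cutoff and window values below are illustrative assumptions, not the app's tuned parameters.

```python
def eye_openness(eye_region, white_thresh=180):
    """Fraction of pixels in the masked eye region brighter than the
    scleral threshold: high when the eye is open, near zero when closed."""
    total = sum(len(row) for row in eye_region)
    white = sum(1 for row in eye_region for v in row if v >= white_thresh)
    return white / total

def is_drowsy(openness_history, cutoff=0.05, window=3):
    """Alarm when the white-pixel fraction stays low for `window` frames."""
    recent = openness_history[-window:]
    return len(recent) == window and all(o < cutoff for o in recent)

open_eye = [[200, 40, 210], [190, 30, 205]]
closed_eye = [[40, 30, 35], [45, 30, 40]]
history = [eye_openness(open_eye), eye_openness(closed_eye),
           eye_openness(closed_eye), eye_openness(closed_eye)]
print(is_drowsy(history))   # True
```

    Requiring a continuously low count over several frames distinguishes drowsiness from ordinary blinks, which suppress the scleral area for only a frame or two.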

  9. Comprehensive survey of deep learning in remote sensing: theories, tools, and challenges for the community

    NASA Astrophysics Data System (ADS)

    Ball, John E.; Anderson, Derek T.; Chan, Chee Seng

    2017-10-01

    In recent years, deep learning (DL), a rebranding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, inevitably RS draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should not only be aware of advancements such as DL, but also be leading researchers in this area. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modeling physical phenomena, (iii) big data, (iv) nontraditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems.

  10. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  11. Omega-3 chicken egg detection system using a mobile-based image processing segmentation method

    NASA Astrophysics Data System (ADS)

    Nurhayati, Oky Dwi; Kurniawan Teguh, M.; Cintya Amalia, P.

    2017-02-01

    An omega-3 chicken egg is a chicken egg produced through food engineering technology: it is laid by hens fed a diet high in omega-3 fatty acids, giving it an omega-3 content fifteen times higher than that of a Leghorn egg. Visually, its shell has the same shape and colour as a Leghorn egg's. The eggs can be distinguished by breaking the shell and testing the yolk's nutrient content in a laboratory, but those methods are neither effective nor efficient. Observing this problem, the purpose of this research is to build an application that detects omega-3 chicken eggs using mobile-based computer vision. The application was built with the OpenCV computer vision library for the Android operating system. The experiment required chicken egg images taken using an egg candling box; 60 omega-3 chicken and Leghorn eggs were used as samples. Image acquisition of each egg was performed with an Android smartphone. We then applied several image processing steps, such as GrabCut, conversion of the RGB image to 8-bit grayscale, median filtering, P-tile segmentation, and morphological operations. The next step was feature extraction, computing mean, variance, skewness, and kurtosis from each image. Finally, using these digital image measurements, the chicken egg images were classified. The results showed that omega-3 chicken eggs and Leghorn eggs had different feature values. The system is able to provide an accuracy of around 91%.
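
    The four statistical moments used as features can be sketched in plain Python over a flattened list of grey levels. The toy pixel values below are hypothetical; the study computes these moments on the segmented candled-egg region.

```python
def moment_features(pixels):
    """Mean, variance, skewness and kurtosis of an image's grey
    levels: the four features used to separate the two egg classes."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    sd = var ** 0.5
    skew = sum(((p - mean) / sd) ** 3 for p in pixels) / n if sd else 0.0
    kurt = sum(((p - mean) / sd) ** 4 for p in pixels) / n if sd else 0.0
    return mean, var, skew, kurt

# Flattened grey levels from two hypothetical candled-egg images.
egg_a = [100, 110, 120, 130, 140]     # symmetric distribution
egg_b = [100, 100, 100, 100, 200]     # right-skewed distribution
print(moment_features(egg_a)[0], moment_features(egg_b)[2])
```

    Two egg classes with the same mean brightness can still differ in skewness and kurtosis, which is why all four moments are fed to the classifier.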

  12. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    PubMed

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer vision PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs when asked without information about lens design. After full information about lens characteristics and an additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  13. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
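    The program's actual computations come from band-recovery models, but the basic relation between a target CV and sample size can be illustrated under a deliberately simplified binomial model (an assumption of this sketch, not the program's method):

```python
import math

def bands_needed(survival, target_cv):
    """Approximate number of banded birds needed to reach a target CV of an
    annual survival estimate under a simplified binomial model, where
    Var(S_hat) = S(1 - S)/n. Illustrative only; the program described in the
    record uses band-recovery models, not this formula."""
    # CV = sqrt((1 - S) / (S * n))  =>  n = (1 - S) / (S * CV^2)
    return math.ceil((1 - survival) / (survival * target_cv ** 2))

n = bands_needed(0.6, 0.10)  # birds needed for a 10% CV when S = 0.6
```

    Halving the target CV roughly quadruples the required sample, which is why study length and precision targets must be specified before banding begins.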

  14. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  15. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  16. Prediction of Occult Invasive Disease in Ductal Carcinoma in Situ Using Deep Learning Features.

    PubMed

    Shi, Bibo; Grimm, Lars J; Mazurowski, Maciej A; Baker, Jay A; Marks, Jeffrey R; King, Lorraine M; Maley, Carlo C; Hwang, E Shelley; Lo, Joseph Y

    2018-03-01

    The aim of this study was to determine whether deep features extracted from digital mammograms using a pretrained deep convolutional neural network are prognostic of occult invasive disease for patients with ductal carcinoma in situ (DCIS) on core needle biopsy. In this retrospective study, digital mammographic magnification views were collected for 99 subjects with DCIS at biopsy, 25 of which were subsequently upstaged to invasive cancer. A deep convolutional neural network model that was pretrained on nonmedical images (eg, animals, plants, instruments) was used as the feature extractor. Through a statistical pooling strategy, deep features were extracted at different levels of convolutional layers from the lesion areas, without sacrificing the original resolution or distorting the underlying topology. A multivariate classifier was then trained to predict which tumors contain occult invasive disease. This was compared with the performance of traditional "handcrafted" computer vision (CV) features previously developed specifically to assess mammographic calcifications. The generalization performance was assessed using Monte Carlo cross-validation and receiver operating characteristic curve analysis. Deep features were able to distinguish DCIS with occult invasion from pure DCIS, with an area under the receiver operating characteristic curve of 0.70 (95% confidence interval, 0.68-0.73). This performance was comparable with the handcrafted CV features (area under the curve = 0.68; 95% confidence interval, 0.66-0.71) that were designed with prior domain knowledge. Despite being pretrained on only nonmedical images, the deep features extracted from digital mammograms demonstrated comparable performance with handcrafted CV features for the challenging task of predicting DCIS upstaging. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  17. Differential effects of absent visual feedback control on gait variability during different locomotion speeds.

    PubMed

    Wuehr, M; Schniepp, R; Pradhan, C; Ilmberger, J; Strupp, M; Brandt, T; Jahn, K

    2013-01-01

    Healthy persons exhibit relatively small temporal and spatial gait variability when walking unimpeded. In contrast, patients with a sensory deficit (e.g., polyneuropathy) show an increased gait variability that depends on speed and is associated with an increased fall risk. The purpose of this study was to investigate the role of vision in gait stabilization by determining the effects of withdrawing visual information (eyes closed) on gait variability at different locomotion speeds. Ten healthy subjects (32.2 ± 7.9 years, 5 women) walked on a treadmill for 5-min periods at their preferred walking speed and at 20, 40, 70, and 80 % of maximal walking speed during the conditions of walking with eyes open (EO) and with eyes closed (EC). The coefficient of variation (CV) and fractal dimension (α) of the fluctuations in stride time, stride length, and base width were computed and analyzed. Withdrawing visual information increased the base width CV for all walking velocities (p < 0.001). The effects of absent visual information on CV and α of stride time and stride length were most pronounced during slow locomotion (p < 0.001) and declined during fast walking speeds. The results indicate that visual feedback control is used to stabilize the medio-lateral (i.e., base width) gait parameters at all speed sections. In contrast, sensory feedback control in the fore-aft direction (i.e., stride time and stride length) depends on speed. Sensory feedback contributes most to fore-aft gait stabilization during slow locomotion, whereas passive biomechanical mechanisms and an automated central pattern generation appear to control fast locomotion.
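    The coefficient of variation used for the stride parameters is simply the standard deviation normalized by the mean. A minimal sketch with invented stride-time data (not the study's measurements):

```python
from statistics import mean, stdev

def coefficient_of_variation(samples):
    """CV (%) of a gait parameter series, e.g. stride times in seconds."""
    return 100.0 * stdev(samples) / mean(samples)

stride_times_eo = [1.02, 1.00, 1.01, 0.99, 1.03, 1.00]  # eyes open (toy data)
stride_times_ec = [1.05, 0.95, 1.08, 0.92, 1.06, 0.97]  # eyes closed (toy data)
cv_eo = coefficient_of_variation(stride_times_eo)
cv_ec = coefficient_of_variation(stride_times_ec)
# The toy eyes-closed series fluctuates more, so its CV is higher.
```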

  18. From geospatial observations of ocean currents to causal predictors of spatio-economic activity using computer vision and machine learning

    NASA Astrophysics Data System (ADS)

    Popescu, Florin; Ayache, Stephane; Escalera, Sergio; Baró Solé, Xavier; Capponi, Cecile; Panciatici, Patrick; Guyon, Isabelle

    2016-04-01

    The big data transformation currently revolutionizing science and industry forges novel possibilities in multi-modal analysis scarcely imaginable only a decade ago. One of the important economic and industrial problems that stand to benefit from the recent expansion of data availability and computational prowess is the prediction of electricity demand and renewable energy generation. Both are correlates of human activity: spatiotemporal energy consumption patterns in society are a factor of both demand (weather dependent) and supply, which together determine cost, a relation expected to strengthen as renewable energy dependence increases. One of the main drivers of European weather patterns is the activity of the Atlantic Ocean, in particular its dominant Northern Hemisphere current: the Gulf Stream. We choose this current as a test case in part because of the larger amount of relevant data and scientific literature available for refining analysis techniques. This data richness is due not only to its economic importance but also to its size: the current is clearly visible in radar and infrared satellite imagery, which makes it easier to detect using Computer Vision (CV). The power of CV techniques makes the analysis developed here scalable to other smaller and less known, but still influential, currents, which are not just curves on a map but complex, evolving, branching trees in 3D projected onto a 2D image. We investigate means of extracting, from several image modalities (including recently available Copernicus radar and earlier infrared satellites), a parameterized representation of the state of the Gulf Stream and its environment that is useful as a feature-space representation in a machine learning context, here within the EC's H2020-sponsored 'See.4C' project, in which data scientists may find novel predictors of spatiotemporal energy flow. Although automated extractors of the Gulf Stream position exist, they differ in methodology and result. We shall attempt to extract a richer feature representation including branching points, eddies, and parameterized changes in transport and velocity. Other related predictive features will be developed similarly, such as inference of deep-water flux along the current path and wider spatial-scale features such as Hough transforms, surface turbulence indicators, and temperature gradient indexes, along with multi-time-scale analysis of ocean height and temperature dynamics. The geospatial imaging and ML community may therefore benefit from a baseline of open-source techniques useful for, and expandable to, other related prediction and scientific analysis tasks.

  19. Computer vision syndrome: a review.

    PubMed

    Blehm, Clayton; Vishnu, Seema; Khattak, Ashbala; Mitra, Shrabanee; Yee, Richard W

    2005-01-01

    As computers become part of our everyday life, more and more people are experiencing a variety of ocular symptoms related to computer use. These include eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome. This article describes both the characteristics and the treatment modalities that are available at this time. Computer vision syndrome symptoms may have ocular (ocular-surface abnormalities or accommodative spasms) and/or extraocular (ergonomic) etiologies. However, the major contributor to computer vision syndrome symptoms by far appears to be dry eye. The visual effects of various display characteristics such as lighting, glare, display quality, refresh rates, and radiation are also discussed. Treatment requires a multidirectional approach combining ocular therapy with adjustment of the workstation. Proper lighting, anti-glare filters, ergonomic positioning of the computer monitor, and regular work breaks may help improve visual comfort. Lubricating eye drops and special computer glasses help relieve ocular surface-related symptoms. More work needs to be done to specifically define the processes that cause computer vision syndrome and to develop and improve effective treatments that successfully address these causes.

  20. vitisFlower®: Development and Testing of a Novel Android-Smartphone Application for Assessing the Number of Grapevine Flowers per Inflorescence Using Artificial Vision Techniques

    PubMed Central

    Aquino, Arturo; Millan, Borja; Gaston, Daniel; Diago, María-Paz; Tardaguila, Javier

    2015-01-01

    Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting, non-invasively and directly in the vineyard, the flower number in grapevine inflorescence photos by implementing artificial vision techniques. The application, called vitisFlower®, firstly guides the user to appropriately take an inflorescence photo using the smartphone’s camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower® has been developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of flowers in the captures were found, with a precision exceeding 94%. Additionally, the application’s efficiency on four different devices covering a wide range of the market’s spectrum was also studied. The results of this benchmarking study showed significant differences among devices, although indicating that the application is efficiently usable even with low-range devices. vitisFlower is one of the first applications for viticulture that is currently freely available on Google Play. PMID:26343664

  1. Technical Note: A respiratory monitoring and processing system based on computer vision: prototype and proof of principle

    PubMed Central

    Atallah, Vincent; Escarmant, Patrick; Vinh‐Hung, Vincent

    2016-01-01

    Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in‐house‐made respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real‐time submillimetric image resolution at 4 m. Using the OpenCV libraries, we optically tracked high‐contrast markers set on patients' breasts. Validation of spatial and temporal accuracy was performed on a mechanical phantom and on human breasts. Madibreast was able to track marker motion at speeds of up to 5 cm/s, at a frame rate of 30 fps, with submillimetric accuracy on the mechanical phantom and human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion of up to 4 mm for deep‐breathing patterns. This low‐cost, computer‐vision system for real‐time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion. PACS number(s): 87.55.km PMID:27685116

  2. Quaternions in computer vision and robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pervin, E.; Webb, J.A.

    1982-01-01

    Computer vision and robotics suffer from not having good tools for manipulating three-dimensional objects. Vectors, coordinate geometry, and trigonometry all have deficiencies. Quaternions can be used to solve many of these problems. Many properties of quaternions that are relevant to computer vision and robotics are developed. Examples are given showing how quaternions can be used to simplify derivations in computer vision and robotics.
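    As a concrete illustration of the kind of simplification quaternions offer, here is a minimal sketch of rotating a 3D vector with the Hamilton product. This is the standard construction, not code from the paper:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    """Rotate 3-vector v by `angle` radians about unit `axis` via q v q*."""
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = qmul(qmul(q, (0.0, *v)), q_conj)
    return (x, y, z)

# A 90-degree rotation of the x unit vector about z yields the y unit vector.
vx = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

    Composing rotations is a single quaternion multiplication, avoiding the gimbal-lock and bookkeeping issues of Euler-angle trigonometry that the abstract alludes to.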

  3. Benchmarking neuromorphic vision: lessons learnt from computer vision

    PubMed Central

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120

  4. (Computer) Vision without Sight

    PubMed Central

    Manduchi, Roberto; Coughlan, James

    2012-01-01

    Computer vision holds great promise for helping persons with blindness or visual impairments (VI) to interpret and explore the visual world. To this end, it is worthwhile to assess the situation critically by understanding the actual needs of the VI population and which of these needs might be addressed by computer vision. This article reviews the types of assistive technology application areas that have already been developed for VI, and the possible roles that computer vision can play in facilitating these applications. We discuss how appropriate user interfaces are designed to translate the output of computer vision algorithms into information that the user can quickly and safely act upon, and how system-level characteristics affect the overall usability of an assistive technology. Finally, we conclude by highlighting a few novel and intriguing areas of application of computer vision to assistive technology. PMID:22815563

  5. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed from a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are constructed from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging, and the approximation networks compute three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves the accuracy of traditional microscope calibration, which relies on references external to the microscope system. The capability of the self-calibration is assessed via the calibration accuracy and the micro-scale measurement error, and the contribution is corroborated by an evaluation against the accuracy of traditional microscope calibration.
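    The abstract does not specify the form of the Bezier approximation networks; as a hedged sketch of the underlying primitive only, a Bezier curve can be evaluated with De Casteljau's algorithm. The control points and the mapping below are invented for illustration:

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at t in [0, 1] via De Casteljau's algorithm.
    `control_points` is a list of (x, y) tuples."""
    pts = list(control_points)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical mapping from a normalized laser-line position to a parameter.
ctrl = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
mid = bezier_point(ctrl, 0.5)  # midpoint of this quadratic curve
```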

  6. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platform (e.g., phone camera, point-and-shot-camera, etc.). Indeed, bringing Computer Vision capabilities on mobile devices open new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.

  7. Dynamic Deployment Simulations of Inflatable Space Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.

    2005-01-01

    The feasibility of using the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method in LS-DYNA to simulate the dynamic deployment of inflatable space structures is investigated. The CV and ALE methods were used to predict the inflation deployment of three folded tube configurations. The CV method was found to be simple and computationally efficient, and may be adequate for modeling slow inflation deployment since the inertia of the inflation gas can be neglected. The ALE method was found to be very computationally intensive, since it involves solving three conservation equations for the fluid as well as dealing with complex fluid-structure interactions.

  8. Bias in error estimation when using cross-validation for model selection.

    PubMed

    Varma, Sudhir; Simon, Richard

    2006-02-23

    Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. Ten-fold CV was used for Shrunken Centroids, while leave-one-out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, in which an inner CV loop is used to tune the parameters while an outer CV loop is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes in the "null" datasets, the CV error estimate for the Shrunken Centroid classifier with optimal parameters was less than 30% on 18.5% of the simulated training datasets. For the SVM with optimal parameters, the estimated error rate was less than 30% on 38% of the "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent test set, for both Shrunken Centroids and SVM classifiers and for both "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well-defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
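    The bias described can be reproduced on null data with a deliberately simplified stand-in classifier (a feature-selecting stump, not the Shrunken Centroids or SVM used in the paper): selecting the parameter that minimizes the error estimate and then reporting that same minimized error is optimistic, while nested CV, which repeats the selection inside each outer training fold, stays near the 50% chance level:

```python
import random

random.seed(0)
N_SAMPLES, N_FEATURES = 100, 200

# "Null" data: binary features and labels drawn independently, so no
# classifier can truly beat 50% error on independent data.
X = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(N_SAMPLES)]
y = [random.randint(0, 1) for _ in range(N_SAMPLES)]

def error(rows, labels, feat, flip):
    """Error rate of a parameter-free stump: predict the feature value,
    optionally flipped. With no fitted parameters, its CV error equals its
    plain error rate, so tuning is the only 'training' step."""
    wrong = sum((r[feat] ^ flip) != lab for r, lab in zip(rows, labels))
    return wrong / len(labels)

def select(rows, labels):
    """'Tune' the classifier: pick the (feature, flip) pair minimizing error."""
    return min(((f, s) for f in range(N_FEATURES) for s in (0, 1)),
               key=lambda p: error(rows, labels, *p))

def folds(n, k=5):
    idx = list(range(n))
    return [idx[i::k] for i in range(k)]

# Naive estimate: tune on all the data and report the minimized error.
naive_err = error(X, y, *select(X, y))

# Nested CV: redo the tuning inside each outer training fold.
outer_errs = []
for test_idx in folds(N_SAMPLES):
    test_set = set(test_idx)
    train_idx = [i for i in range(N_SAMPLES) if i not in test_set]
    f, s = select([X[i] for i in train_idx], [y[i] for i in train_idx])
    outer_errs.append(error([X[i] for i in test_idx], [y[i] for i in test_idx], f, s))
nested_err = sum(outer_errs) / len(outer_errs)
# On null data, naive_err is typically optimistically low, while nested_err
# remains close to the 0.5 chance level.
```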

  9. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.

  10. Computer vision in the poultry industry

    USDA-ARS?s Scientific Manuscript database

    Computer vision is becoming increasingly important in the poultry industry due to increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist. Researc...

  11. [Comparison study between biological vision and computer vision].

    PubMed

    Liu, W; Yuan, X G; Yang, C X; Liu, Z Q; Wang, R

    2001-08-01

    The development of biological vision, in terms of both structure and mechanism, is discussed, covering the anatomical structure of biological vision, a tentative classification of receptive fields, parallel processing of visual information, and feedback and integration effects in the visual cortex. New advances in the field, drawn from studies of the morphology of biological vision, are introduced. In addition, biological vision and computer vision are compared, and their similarities and differences are pointed out.

  12. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration.

    PubMed

    Nikitichev, Daniil I; Shakir, Dzhoshkun I; Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-02-23

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. The target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community.
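    For context, OpenCV's calibration module models radial distortion with polynomial coefficients estimated from views of such a target. Below is a minimal sketch of a two-coefficient radial model and its inversion by fixed-point iteration; the normalized coordinates and coefficient values are illustrative, not from the paper:

```python
def distort(point, k1, k2):
    """Apply a two-coefficient radial distortion model to a normalized
    image point (x, y): x_d = x * (1 + k1*r^2 + k2*r^4), same for y."""
    x, y = point
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * scale, y * scale)

def undistort(point, k1, k2, iterations=20):
    """Invert the radial model by fixed-point iteration, a common approach
    when no closed-form inverse is available."""
    xd, yd = point
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return (x, y)

p = distort((0.3, 0.2), k1=-0.2, k2=0.05)   # barrel distortion pulls points inward
q = undistort(p, k1=-0.2, k2=0.05)          # recovers approximately (0.3, 0.2)
```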

  13. Terabytes to Megabytes: Data Reduction Onsite for Remote Limited Bandwidth Systems

    NASA Astrophysics Data System (ADS)

    Hirsch, M.

    2016-12-01

    Inexpensive, battery-powered embedded computer systems such as the Intel Edison and Raspberry Pi have inspired makers of all ages to create and deploy sensor systems. Geoscientists are likewise leveraging these inexpensive embedded computers to build solar-powered and other low-resource systems for ionospheric observation. We have developed OpenCV-based machine vision algorithms that reduce terabytes per night of high-speed aurora video down to megabytes, aiding the automated sifting and retention of high-value data from the mountains of less interesting data. Given prohibitively expensive data connections in many parts of the world, such techniques may generalize beyond the auroral video and passive FM radar implemented so far. Once the automated algorithm decides which data to keep, automated upload and distribution techniques help avoid excessive delay and consumption of researcher time. Open-source collaborative software development enables data audiences from experts through citizen enthusiasts to access the data and make exciting plots. Open software and data aid cross-disciplinary collaboration, STEM outreach, and public awareness of the contributions each geoscience data collection system makes.
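    A toy sketch of the reduction idea: keep a frame only when it differs enough from the last retained frame. This frame-differencing stand-in is not the deployed OpenCV pipeline, and the threshold and "video" below are invented:

```python
def keep_interesting_frames(frames, threshold=10.0):
    """Retain only frames whose mean absolute difference from the previously
    kept frame exceeds a threshold -- a toy stand-in for onsite reduction
    that discards uneventful video. Each frame is a flat list of pixels."""
    kept = [frames[0]]
    for frame in frames[1:]:
        prev = kept[-1]
        mad = sum(abs(a - b) for a, b in zip(frame, prev)) / len(frame)
        if mad > threshold:
            kept.append(frame)
    return kept

# Toy "video": flat dark frames with one bright auroral event in the middle.
quiet = [10] * 64
bright = [200] * 64
video = [quiet, quiet, bright, quiet, quiet]
kept = keep_interesting_frames(video)  # retains the event and its boundaries
```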

  14. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration

    PubMed Central

    Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-01-01

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community. PMID:28287588
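
OpenCV's calib3d module estimates lens parameters from images of a known pattern such as the target described here. As a hedged sketch of what the recovered radial model looks like (not the authors' code; the coefficients k1 and k2 are illustrative), the Brown-Conrady radial term and its inversion by fixed-point iteration can be written as:

```python
def distort(x, y, k1, k2):
    # Radial model: scale a normalized image point by 1 + k1*r^2 + k2*r^4,
    # where r is the distance from the optical axis.
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * s, y * s

def undistort(xd, yd, k1, k2, iters=20):
    # Invert the model by fixed-point iteration: re-evaluate the radial
    # scale at the current estimate and divide the distorted point by it.
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / s, yd / s
    return x, y
```

Calibration amounts to choosing k1 and k2 (plus intrinsics) so that reprojected pattern points match their detected image positions; compensation then applies the inverse mapping to every pixel.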

  15. Reinforcement learning in computer vision

    NASA Astrophysics Data System (ADS)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, various complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is a modern machine learning technology in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving such applied tasks as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes the reinforcement learning technology and its use for solving computer vision problems.
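
As a minimal concrete example of the setting the paper surveys (a toy sketch, unrelated to any specific system it cites), tabular Q-learning on a short corridor learns by interaction to move toward a rewarded state:

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    # q[state][action]: action 0 moves left, action 1 moves right.
    # Reward 1.0 is received on entering the rightmost (terminal) state.
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Temporal-difference update toward r + gamma * max_a' Q(s', a').
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

In a vision-driven agent, the discrete state index would be replaced by features extracted from images, which is exactly where the computer vision tasks above enter the loop.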

  16. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  17. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  18. Computer vision for foreign body detection and removal in the food industry

    USDA-ARS?s Scientific Manuscript database

    Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...

  19. Chapter 11. Quality evaluation of apple by computer vision

    USDA-ARS?s Scientific Manuscript database

    Apple is one of the most consumed fruits in the world, and there is a critical need for enhanced computer vision technology for quality assessment of apples. This chapter gives a comprehensive review on recent advances in various computer vision techniques for detecting surface and internal defects ...

  20. Deep Learning for Computer Vision: A Brief Review

    PubMed Central

    Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios

    2018-01-01

    Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619

  1. A computer vision for animal ecology.

    PubMed

    Weinstein, Ben G

    2018-05-01

    A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis. © 2017 The Author. Journal of Animal Ecology © 2017 British Ecological Society.

  2. Leaf-GP: an open and automated software application for measuring growth phenotypes for arabidopsis and wheat.

    PubMed

    Zhou, Ji; Applegate, Christopher; Alonso, Albor Dobon; Reynolds, Daniel; Orford, Simon; Mackiewicz, Michal; Griffiths, Simon; Penfield, Steven; Pullen, Nick

    2017-01-01

    Plants demonstrate dynamic growth phenotypes that are determined by genetic and environmental factors. Phenotypic analysis of growth features over time is a key approach to understand how plants interact with environmental change as well as respond to different treatments. Although the importance of measuring dynamic growth traits is widely recognised, available open software tools are limited in terms of batch image processing, multiple traits analyses, software usability and cross-referencing results between experiments, making automated phenotypic analysis problematic. Here, we present Leaf-GP (Growth Phenotypes), an easy-to-use and open software application that can be executed on different computing platforms. To facilitate diverse scientific communities, we provide three software versions, including a graphic user interface (GUI) for personal computer (PC) users, a command-line interface for high-performance computer (HPC) users, and a well-commented interactive Jupyter Notebook (also known as the iPython Notebook) for computational biologists and computer scientists. The software is capable of extracting multiple growth traits automatically from large image datasets. We have utilised it in Arabidopsis thaliana and wheat ( Triticum aestivum ) growth studies at the Norwich Research Park (NRP, UK). By quantifying a number of growth phenotypes over time, we have identified diverse plant growth patterns between different genotypes under several experimental conditions. As Leaf-GP has been evaluated with noisy image series acquired by different imaging devices (e.g. smartphones and digital cameras) and still produced reliable biological outputs, we therefore believe that our automated analysis workflow and customised computer vision based feature extraction software implementation can facilitate a broader plant research community for their growth and development studies. 
Furthermore, because we implemented Leaf-GP based on open Python-based computer vision, image analysis and machine learning libraries, we believe that our software not only can contribute to biological research, but also demonstrates how to utilise existing open numeric and scientific libraries (e.g. Scikit-image, OpenCV, SciPy and Scikit-learn) to build sound plant phenomics analytic solutions, in an efficient and effective way. Leaf-GP is a sophisticated software application that provides three approaches to quantify growth phenotypes from large image series. We demonstrate its usefulness and high accuracy based on two biological applications: (1) the quantification of growth traits for Arabidopsis genotypes under two temperature conditions; and (2) measuring wheat growth in the glasshouse over time. The software is easy-to-use and cross-platform, which can be executed on Mac OS, Windows and HPC, with open Python-based scientific libraries preinstalled. Our work presents the advancement of how to integrate computer vision, image analysis, machine learning and software engineering in plant phenomics software implementation. To serve the plant research community, our modular source code, detailed comments, executables (.exe for Windows; .app for Mac), and experimental results are freely available at https://github.com/Crop-Phenomics-Group/Leaf-GP/releases.
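
As a toy illustration of the kind of vegetation segmentation such pipelines build on (a sketch using the common excess-green index, not Leaf-GP's actual implementation; the function name and threshold are hypothetical), projected leaf area can be approximated by counting green-dominant pixels:

```python
def leaf_area(image, excess_green_thr=20):
    """Count plant pixels in an image given as rows of (R, G, B) tuples,
    using the excess-green index 2G - R - B as a crude vegetation cue."""
    count = 0
    for row in image:
        for r, g, b in row:
            if 2 * g - r - b > excess_green_thr:
                count += 1
    return count
```

Tracking this count over a time series of images is, in essence, how a growth curve is extracted; real tools add calibration to physical units, noise filtering, and per-plant labelling.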

  3. Computer vision-based sorting of Atlantic salmon (Salmo salar) fillets according to their color level.

    PubMed

    Misimi, E; Mathiassen, J R; Erikson, U

    2007-01-01

    A computer vision method was used to evaluate the color of Atlantic salmon (Salmo salar) fillets. Computer vision-based sorting of fillets according to their color was studied on 2 separate groups of salmon fillets. The images of fillets were captured using a digital camera of high resolution. Images of salmon fillets were then segmented into the regions of interest and analyzed in red, green, and blue (RGB) and CIE lightness, redness, and yellowness (Lab) color spaces, and classified according to the Roche color card industrial standard. Comparisons of fillet color between visual evaluations, made by a panel of human inspectors according to the Roche SalmoFan lineal standard, and the color scores generated from the computer vision algorithm showed that there were no significant differences between the methods. Overall, computer vision can be used as a powerful tool to sort fillets by color in a fast and nondestructive manner. The low cost of implementing computer vision solutions creates the potential to replace manual labor in fish processing plants with automation.
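
The Lab analysis mentioned above relies on a standard colour-space conversion. A self-contained sketch of the sRGB to CIE L*a*b* transform (D65 reference white; not the authors' code):

```python
def rgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIE L*a*b* (D65 reference white)."""
    def linearize(c):
        # Undo sRGB gamma encoding.
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # Linear RGB -> CIE XYZ.
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Working in Lab separates lightness (L*) from the redness axis (a*), which is why it suits grading salmon flesh colour against a redness-based card standard better than raw RGB does.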

  4. Low-density lipoprotein cholesterol levels and lipid-modifying therapy prescription patterns in the real world: An analysis of more than 33,000 high cardiovascular risk patients in Japan.

    PubMed

    Teramoto, Tamio; Uno, Kiyoko; Miyoshi, Izuru; Khan, Irfan; Gorcyca, Katherine; Sanchez, Robert J; Yoshida, Shigeto; Mawatari, Kazuhiro; Masaki, Tomoya; Arai, Hidenori; Yamashita, Shizuya

    2016-08-01

    Low-density lipoprotein cholesterol (LDL-C) is a key modifiable risk factor in the development of cardiovascular (CV) disease. In 2012, the Japan Atherosclerosis Society (JAS) issued guidelines recommending statins as first-line pharmacotherapy for lowering LDL-C in patients at high risk for CV events. This study assessed achievement of recommended LDL-C goals and lipid-modifying therapy (LMT) use in a high CV risk population in Japan. Patients from the Medical Data Vision (MDV) database, an electronic hospital-based claims database in Japan, who met the following inclusion criteria were included in this study: LDL-C measurement in 2013; ≥20 years of age; ≥2 years representation in the database; and a high CV risk condition (recent acute coronary syndrome (ACS), other coronary heart disease (CHD), ischemic stroke, peripheral arterial disease (PAD) or diabetes). LDL-C goal attainment was assessed based on LDL-C targets in the JAS guidelines. A total of 33,325 high CV risk patients met the inclusion criteria. Overall, 68% of the cohort achieved guideline recommended LDL-C targets, with only 42% receiving current treatment with statins. Attainment of LDL-C goals was 68% for ACS, 55% for CHD, and 80% each for ischemic stroke, PAD, and diabetes patients. Concomitant use of non-statin LMTs was low. In a high CV risk population in a routine care setting in Japan, guideline recommended LDL-C goal attainment and utilization of statins and other LMT was low. In addition, physicians appeared to be more likely to consider the initiation of statins in patients with higher baseline LDL-C levels. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  5. Machine Learning, deep learning and optimization in computer vision

    NASA Astrophysics Data System (ADS)

    Canu, Stéphane

    2017-03-01

    As quoted in the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning covering basic motivations, ideas, models and optimization in deep learning for computer vision, identifying challenges and opportunities. It will focus on issues related with large scale learning that is: high dimensional features, large variety of visual classes, and large number of examples.

  6. Reproducibility of a peripheral quantitative computed tomography scan protocol to measure the material properties of the second metatarsal.

    PubMed

    Chaplais, Elodie; Greene, David; Hood, Anita; Telfer, Scott; du Toit, Verona; Singh-Grewal, Davinder; Burns, Joshua; Rome, Keith; Schiferl, Daniel J; Hendry, Gordon J

    2014-07-19

    Peripheral quantitative computed tomography (pQCT) is an established technology that allows for the measurement of the material properties of bone. Alterations to bone architecture are associated with an increased risk of fracture. Further pQCT research is necessary to identify regions of interest that are prone to fracture risk in people with chronic diseases. The second metatarsal is a common site for the development of insufficiency fractures, and as such the aim of this study was to assess the reproducibility of a novel scanning protocol of the second metatarsal using pQCT. Eleven embalmed cadaveric leg specimens were scanned six times; three times each with and without repositioning. Each foot was positioned on a custom-designed acrylic foot plate to permit unimpeded scans of the region of interest. Sixty-six scans were obtained at 15% (distal) and 50% (mid shaft) of the second metatarsal. Voxel size and scan speed were reduced to 0.40 mm and 25 mm·sec⁻¹. The reference line was positioned at the most distal portion of the 2nd metatarsal. Repeated measurements of six key variables related to bone properties were subject to reproducibility testing. Data were log transformed and the reproducibility of scans was assessed using intraclass correlation coefficients (ICC) and coefficients of variation (CV%). Reproducibility of the measurements without repositioning was estimated as: trabecular area (ICC 0.95; CV% 2.4), trabecular density (ICC 0.98; CV% 3.0), Strength Strain Index (SSI) - distal (ICC 0.99; CV% 5.6), cortical area (ICC 1.0; CV% 1.5), cortical density (ICC 0.99; CV% 0.1), SSI - mid shaft (ICC 1.0; CV% 2.4). Reproducibility of the measurements after repositioning was estimated as: trabecular area (ICC 0.96; CV% 2.4), trabecular density (ICC 0.98; CV% 2.8), SSI - distal (ICC 1.0; CV% 3.5), cortical area (ICC 0.99; CV% 2.4), cortical density (ICC 0.98; CV% 0.8), SSI - mid shaft (ICC 0.99; CV% 3.2). 
The scanning protocol generated excellent reproducibility for key bone properties measured at the distal and mid-shaft regions of the 2nd metatarsal. This protocol extends the capabilities of pQCT to evaluate bone quality in people who may be at an increased risk of metatarsal insufficiency fractures.
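
The two reliability statistics reported above are straightforward to compute from a subjects-by-repeated-scans table; a hedged sketch (one-way random-effects ICC(1,1); the function names are illustrative, not the authors' analysis code):

```python
def cv_percent(values):
    """Coefficient of variation of repeated measurements: SD / mean, in %."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / (len(values) - 1)
    return 100.0 * var ** 0.5 / m

def icc_oneway(table):
    """One-way random-effects ICC(1,1) for an n-subjects x k-repeats table:
    (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (n * k)
    row_means = [sum(row) / k for row in table]
    # Between-subjects and within-subject mean squares.
    msb = k * sum((rm - grand) ** 2 for rm in row_means) / (n - 1)
    msw = sum((v - rm) ** 2 for row, rm in zip(table, row_means) for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

ICC near 1 with a small CV%, as reported here, means repeated scans of the same specimen agree closely relative to the differences between specimens.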

  7. Reproducibility of a peripheral quantitative computed tomography scan protocol to measure the material properties of the second metatarsal

    PubMed Central

    2014-01-01

    Background Peripheral quantitative computed tomography (pQCT) is an established technology that allows for the measurement of the material properties of bone. Alterations to bone architecture are associated with an increased risk of fracture. Further pQCT research is necessary to identify regions of interest that are prone to fracture risk in people with chronic diseases. The second metatarsal is a common site for the development of insufficiency fractures, and as such the aim of this study was to assess the reproducibility of a novel scanning protocol of the second metatarsal using pQCT. Methods Eleven embalmed cadaveric leg specimens were scanned six times; three times each with and without repositioning. Each foot was positioned on a custom-designed acrylic foot plate to permit unimpeded scans of the region of interest. Sixty-six scans were obtained at 15% (distal) and 50% (mid shaft) of the second metatarsal. Voxel size and scan speed were reduced to 0.40 mm and 25 mm·sec⁻¹. The reference line was positioned at the most distal portion of the 2nd metatarsal. Repeated measurements of six key variables related to bone properties were subject to reproducibility testing. Data were log transformed and the reproducibility of scans was assessed using intraclass correlation coefficients (ICC) and coefficients of variation (CV%). Results Reproducibility of the measurements without repositioning was estimated as: trabecular area (ICC 0.95; CV% 2.4), trabecular density (ICC 0.98; CV% 3.0), Strength Strain Index (SSI) - distal (ICC 0.99; CV% 5.6), cortical area (ICC 1.0; CV% 1.5), cortical density (ICC 0.99; CV% 0.1), SSI – mid shaft (ICC 1.0; CV% 2.4). Reproducibility of the measurements after repositioning was estimated as: trabecular area (ICC 0.96; CV% 2.4), trabecular density (ICC 0.98; CV% 2.8), SSI - distal (ICC 1.0; CV% 3.5), cortical area (ICC 0.99; CV% 2.4), cortical density (ICC 0.98; CV% 0.8), SSI – mid shaft (ICC 0.99; CV% 3.2). 
Conclusions The scanning protocol generated excellent reproducibility for key bone properties measured at the distal and mid-shaft regions of the 2nd metatarsal. This protocol extends the capabilities of pQCT to evaluate bone quality in people who may be at an increased risk of metatarsal insufficiency fractures. PMID:25037451

  8. Incremental prognostic value of kidney function decline over coronary artery disease for cardiovascular event prediction after coronary computed tomography.

    PubMed

    Bittencourt, Marcio S; Hulten, Edward A; Ghoshhajra, Brian; Abbara, Suhny; Murthy, Venkatesh L; Divakaran, Sanjay; Nasir, Khurram; Gowdak, Luis Henrique W; Riella, Leonardo V; Chiumiento, Marco; Hoffmann, Udo; Di Carli, Marcelo F; Blankstein, Ron

    2015-07-01

    It is unknown whether mild chronic kidney disease (CKD) is associated with adverse cardiovascular (CV) prognosis after accounting for coronary artery disease (CAD). Here we evaluated the interplay between CKD and CAD in predicting CV death or myocardial infarction (MI) and all-cause death. We included 1541 consecutive patients in the Partners registry (mean age 55 years, 43% female) over 18 years old with no known prior CAD who underwent coronary computed tomography angiography (CCTA). The results of CCTA were categorized as normal, nonobstructive (<50% stenosis), or obstructive (≥50% stenosis). Overall, 653 of the patients had no CAD, 583 had nonobstructive CAD, and 305 had obstructive CAD, while 1299 had an eGFR ≥60 ml/min per 1.73 m² and 242 had an eGFR below this value. The presence and severity of CAD was significantly associated with an increased rate of CV death or MI and all-cause death, even after adjustment for age, gender, symptoms, and risk factors. Similarly, reduced eGFR was significantly associated with CV death or MI and all-cause death after similar adjustment. The addition of reduced eGFR to a model which included both clinical variables and CCTA findings resulted in significant improvement in the prediction of CV death or MI and all-cause death. Thus, among individuals referred for CCTA to evaluate CAD, renal dysfunction is associated with an increased rate of CV events, mainly driven by an increase in the rate of noncoronary CV events. In this group of patients, both eGFR and the presence and severity of CAD together improve the prediction of future CV events and death.

  9. 3-D Signal Processing in a Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  10. An overview of computer vision

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  11. Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    ERIC Educational Resources Information Center

    Cazorla, Miguel; Viejo, Diego

    2015-01-01

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library, oriented to teaching computer vision. We have designed and built the library from scratch with emphasis on readability and…

  12. Detecting Motion from a Moving Platform; Phase 3: Unification of Control and Sensing for More Advanced Situational Awareness

    DTIC Science & Technology

    2011-11-01

    RX-TY-TR-2011-0096-01) develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica...01 summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica

  13. Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.

    PubMed

    Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish

    2015-01-01

    Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
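
A one-step sketch of the idea (purely illustrative; the authors use an actuated pan-tilt platform and a real event sensor, and the function name here is hypothetical): shifting the view across a static image and thresholding log-intensity change yields ON/OFF events much like a neuromorphic sensor's output:

```python
import math

def image_to_events(image, dx, threshold):
    """Emit (x, y, polarity) events where log-intensity changes by more
    than `threshold` when the view shifts right by `dx` pixels."""
    events = []
    for y, row in enumerate(image):
        for x in range(len(row) - dx):
            before = math.log(row[x] + 1)       # +1 avoids log(0)
            after = math.log(row[x + dx] + 1)
            if after - before > threshold:
                events.append((x, y, 1))        # ON event (brighter)
            elif before - after > threshold:
                events.append((x, y, -1))       # OFF event (darker)
    return events
```

Repeating this over a saccade trajectory, with real sensor timing, is what turns a frame-based dataset into a stream of timestamped events.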

  14. Vision-Based UAV Flight Control and Obstacle Avoidance

    DTIC Science & Technology

    2006-01-01

    denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote...structure analysis often involve computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive...First, we extract a set of features from each block. 2) Second, we compute the distance between these two sets of features. In conventional motion

  15. Quality grading of Atlantic salmon (Salmo salar) by computer vision.

    PubMed

    Misimi, E; Erikson, U; Skavhaug, A

    2008-06-01

    In this study, we present a promising method of computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among different quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before the image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and then feature extraction was performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier, which was designed using linear discriminant analysis. The performance of the classifier was tested by using the leave-one-out cross-validation method, and the classification results showed a good agreement between the classification done by human inspectors and by the computer vision. The computer vision-based method classified correctly 90% of the salmon from the data set as compared with the classification by human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner by a relatively simple classifier algorithm. The low cost of implementation of today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants as it can replace manual labor, on which grading tasks still rely.
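
For a two-class problem with one discriminative feature, the threshold classifier plus leave-one-out procedure described above reduces to a few lines; this is a simplified sketch, not the authors' multi-feature implementation, and it assumes class 1 has the larger feature mean:

```python
def loo_threshold_accuracy(features, labels):
    """Leave-one-out accuracy of a midpoint-threshold classifier on a
    single feature; labels are 0/1 and class 1 has the larger mean."""
    correct = 0
    for i in range(len(features)):
        train = [(f, l) for j, (f, l) in enumerate(zip(features, labels)) if j != i]
        mean0 = sum(f for f, l in train if l == 0) / sum(1 for _, l in train if l == 0)
        mean1 = sum(f for f, l in train if l == 1) / sum(1 for _, l in train if l == 1)
        # Decision boundary halfway between the class means
        # (1-D linear discriminant under equal class variances).
        threshold = (mean0 + mean1) / 2
        prediction = 1 if features[i] > threshold else 0
        correct += prediction == labels[i]
    return correct / len(features)
```

Leave-one-out validation matters here because each grading class contains few fish: every sample is scored by a classifier trained without it, giving an unbiased estimate from a small dataset.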

  16. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  17. Progress in computer vision.

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  18. Can Humans Fly Action Understanding with Multiple Classes of Actors

    DTIC Science & Technology

    2015-06-08

    recognition using structure from motion point clouds. In European Conference on Computer Vision, 2008. [5] R. Caruana. Multitask learning. Machine Learning...tonomous driving? The KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition, 2012. [12] L. Gorelick, M. Blank

  19. Computer vision in cell biology.

    PubMed

    Danuser, Gaudenz

    2011-11-23

    Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Computer Vision Syndrome.

    PubMed

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  1. Measuring exertion time, duty cycle and hand activity level for industrial tasks using computer vision.

    PubMed

    Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G

    2017-12-01

    Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm, and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found it remained unaffected when DC error was less than 5%. Thus, a DC error less than 10% will impact HAL less than 0.5 HAL, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
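
    The duty cycle reported above is simply the exerting fraction of the work cycle. A minimal sketch of that measurement, assuming a hypothetical per-frame binary exertion signal rather than the paper's DT or FVT algorithms:

```python
def duty_cycle(exertion_frames):
    """Fraction of frames flagged as exertion, in [0, 1]."""
    if not exertion_frames:
        return 0.0
    return sum(exertion_frames) / len(exertion_frames)

# Hypothetical per-frame flags for one work cycle:
# 1 = hand exerting force, 0 = rest.
signal = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0]
dc = duty_cycle(signal)  # 0.5, i.e. a 50% duty cycle
```

    In practice the per-frame flags would come from the video-tracking step; HAL can then be derived from the duty cycle and exertion frequency.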

  2. Evaluation of surveillance methods for monitoring house fly abundance and activity on large commercial dairy operations.

    PubMed

    Gerry, Alec C; Higginbotham, G E; Periera, L N; Lam, A; Shelton, C R

    2011-06-01

    Relative house fly, Musca domestica L., activity at three large dairies in central California was monitored during the peak fly activity period from June to August 2005 by using spot cards, fly tapes, bait traps, and Alsynite traps. Counts for all monitoring methods were significantly related at two of three dairies, with spot card counts significantly related to fly tape counts recorded the same week, and both spot card counts and fly tape counts significantly related to bait trap counts 1-2 wk later. Mean fly counts differed significantly between dairies, but a significant interaction between dairies sampled and monitoring methods used demonstrates that between-dairy comparisons are unwise. Estimate precision was determined by the coefficient of variability (CV) (or SE/mean). Using a CV = 0.15 as a desired level of estimate precision and assuming an integrated pest management (IPM) action threshold near the peak house fly activity measured by each monitoring method, house fly monitoring at a large dairy would require 12 spot cards placed in midafternoon shaded fly resting sites near cattle or seven bait traps placed in open areas near cattle. Software (FlySpotter; http://ucanr.org/sites/FlySpotter/download/) using computer vision technology was developed to count fly spots on a scanned image of a spot card to dramatically reduce time invested in monitoring house flies. Counts provided by the FlySpotter software were highly correlated to visual counts. The use of spot cards for monitoring house flies is recommended for dairy IPM programs.
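
    The core of a spot-counting tool like FlySpotter is connected-component labelling of dark marks on a light card. A minimal pure-Python sketch; the threshold and 4-connectivity below are illustrative choices, not FlySpotter's actual parameters:

```python
from collections import deque

def count_spots(image, threshold=128):
    """Count connected dark regions (spots) in a grayscale image.

    image: list of rows of integer pixel values (0=black, 255=white).
    A pixel belongs to a spot if its value is below `threshold`;
    4-connected dark pixels are grouped into one spot.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    spots = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold and not seen[r][c]:
                spots += 1                   # new spot found
                queue = deque([(r, c)])      # flood-fill its pixels
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx]
                                and image[ny][nx] < threshold):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return spots

# Tiny synthetic "spot card": two dark blobs on a white background.
card = [
    [255,   0,   0, 255, 255],
    [255,   0, 255, 255, 255],
    [255, 255, 255,   0,   0],
    [255, 255, 255,   0, 255],
]
print(count_spots(card))  # -> 2
```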

  3. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs, and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted for computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  4. Noisy Spiking in Visual Area V2 of Amblyopic Monkeys.

    PubMed

    Wang, Ye; Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Smith, Earl L; Chino, Yuzo M

    2017-01-25

    Interocular decorrelation of input signals in developing visual cortex can cause impaired binocular vision and amblyopia. Although increased intrinsic noise is thought to be responsible for a range of perceptual deficits in amblyopic humans, the neural basis for the elevated perceptual noise in amblyopic primates is not known. Here, we tested the idea that perceptual noise is linked to the neuronal spiking noise (variability) resulting from developmental alterations in cortical circuitry. To assess spiking noise, we analyzed the contrast-dependent dynamics of spike counts and spiking irregularity by calculating the square of the coefficient of variation in interspike intervals (CV^2) and the trial-to-trial fluctuations in spiking, or mean-matched Fano factor (m-FF), in visual area V2 of monkeys reared with chronic monocular defocus. In amblyopic neurons, the contrast versus response functions and the spike count dynamics exhibited significant deviations from comparable data for normal monkeys. The CV^2 was pronounced in amblyopic neurons for high-contrast stimuli and the m-FF was abnormally high in amblyopic neurons for low-contrast gratings. The spike count, CV^2, and m-FF of spontaneous activity were also elevated in amblyopic neurons. These contrast-dependent spiking irregularities were correlated with the level of binocular suppression in these V2 neurons and with the severity of perceptual loss for individual monkeys. Our results suggest that the developmental alterations in normalization mechanisms resulting from early binocular suppression can explain much of these contrast-dependent spiking abnormalities in V2 neurons and the perceptual performance of our amblyopic monkeys. Amblyopia is a common developmental vision disorder in humans. Despite the extensive animal studies on how amblyopia emerges, we know surprisingly little about the neural basis of amblyopia in humans and nonhuman primates.
Although the vision of amblyopic humans is often described as being noisy by perceptual and modeling studies, the exact nature or origin of this elevated perceptual noise is not known. We show that elevated and noisy spontaneous activity and contrast-dependent noisy spiking (spiking irregularity and trial-to-trial fluctuations in spiking) in neurons of visual area V2 could limit the visual performance of amblyopic primates. Moreover, we discovered that the noisy spiking is linked to a high level of binocular suppression in visual cortex during development. Copyright © 2017 the authors.
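
    The two noise statistics named above are easy to state concretely. A minimal sketch, computing the plain Fano factor only; the paper's mean-matched Fano factor additionally equalizes spike-count distributions across conditions before this step:

```python
def cv_squared(spike_times):
    """CV^2 of interspike intervals: variance(ISI) / mean(ISI)^2."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return var / mean ** 2

def fano_factor(trial_counts):
    """Fano factor: variance of spike counts across trials / mean count.

    The paper uses a mean-matched Fano factor, which first equalizes
    the spike-count distributions across conditions; that matching
    step is omitted in this simplified sketch.
    """
    mean = sum(trial_counts) / len(trial_counts)
    var = sum((c - mean) ** 2 for c in trial_counts) / len(trial_counts)
    return var / mean

# A perfectly regular train gives CV^2 = 0; Poisson-like spiking
# gives CV^2 and Fano factor near 1.
regular = [0, 1, 2, 3, 4]        # spike times with identical ISIs
cv2 = cv_squared(regular)        # -> 0.0
ff = fano_factor([4, 6, 5, 5])   # variance 0.5 / mean 5 = 0.1
```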

  5. Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft

    DTIC Science & Technology

    2017-06-01

    International Journal of Computer Science and Network Security 7 no. 3: 112–117. Accessed April 7, 2017. http://www.sciencedirect.com/science/article/pii...the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system. This thesis examines the...integration into an autonomous aircraft control system. 14. SUBJECT TERMS autonomous systems, auto-land, computer vision, image processing

  6. Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.

    PubMed

    Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer

    2015-11-01

    Counting objects is a fundamental image processing primitive, and has many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even with that, fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions as possible of humans (under certain intuitively reasonable assumptions that we justify experimentally in our paper); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.

  7. Biological Basis For Computer Vision: Some Perspectives

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.

  8. Dynamic Vision for Control

    DTIC Science & Technology

    2006-07-27

    unlimited 13. SUPPLEMENTARY NOTES 14. ABSTRACT The goal of this project was to develop analytical and computational tools to make vision a viable sensor for...vision.ucla.edu July 27, 2006 Abstract The goal of this project was to develop analytical and computational tools to make vision a viable sensor for the ... sensors. We have proposed the framework of stereoscopic segmentation where multiple images of the same objects were jointly processed to extract geometry

  9. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
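
    The multi-scale edge detector mentioned above convolves the image with a Laplacian of Gaussian (LoG) kernel. A software sketch of that operator follows; the camera implements it as a pipelined FPGA architecture, so this is only the reference computation, with a synthetic step-edge image:

```python
import math

def log_kernel(size, sigma):
    """Sample a Laplacian-of-Gaussian kernel of odd `size`:
    LoG(x, y) = ((x^2 + y^2 - 2*sigma^2) / sigma^4) * exp(-(x^2 + y^2) / (2*sigma^2))
    """
    half = size // 2
    k = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            g = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append((x * x + y * y - 2 * sigma * sigma) / sigma ** 4 * g)
        k.append(row)
    return k

def convolve(image, kernel):
    """Valid-mode 2-D correlation (no padding); since the LoG kernel
    is symmetric, correlation and convolution coincide here."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        out.append([sum(kernel[u][v] * image[i + u][j + v]
                        for u in range(kh) for v in range(kw))
                    for j in range(len(image[0]) - kw + 1)])
    return out

# A vertical step edge: the LoG response changes sign across it,
# and edges are detected at those zero-crossings.
img = [[0] * 4 + [255] * 4 for _ in range(8)]
resp = convolve(img, log_kernel(5, 1.0))
```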

  10. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision which has broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the binocular stereo camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard calibration method. For image feature extraction and stereo matching, the SURF local feature operator and the SGBM global matching algorithm are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between matched image points and 3D object points can be established using the calibrated camera parameters, yielding the 3D information.
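
    Once calibration and matching are done, the 3D information follows from standard triangulation: for a rectified stereo pair, depth is Z = f·B/d with focal length f in pixels, baseline B, and disparity d in pixels. A minimal sketch with hypothetical calibration values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-pair triangulation: Z = f * B / d, with the focal
    length f in pixels, baseline B in metres, disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 700 px focal length, 12 cm baseline.
# A 35 px disparity then corresponds to a depth of about 2.4 m.
z = depth_from_disparity(disparity_px=35.0, focal_px=700.0, baseline_m=0.12)
```

    In practice f and B come from the calibration step and d from the SGBM disparity map, per pixel.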

  11. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  12. Machine learning and computer vision approaches for phenotypic profiling

    PubMed Central

    Morris, Quaid

    2017-01-01

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. PMID:27940887

  13. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Treesearch

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  14. Review of Occupational Therapy Research in the Practice Area of Children and Youth

    PubMed Central

    Bendixen, Roxanna M.; Kreider, Consuelo M.

    2011-01-01

    A systematic review was conducted focusing on articles in the Occupational Therapy (OT) practice category of Childhood and Youth (C&Y) published in the American Journal of Occupational Therapy (AJOT) over the two-year period of 2009–2010. The frameworks of the International Classification of Functioning, Disability and Health (ICF) and Positive Youth Development (PYD) were used to explore OT research progress toward the goals of the Centennial Vision (CV). Forty-six research articles were organized by research type and were classified within these two frameworks. The majority of reviewed published research investigated variables representing constructs falling within the ICF domains of Body Functioning and Activity. The effect of OT interventions on PYD resided primarily in building competence. In order to meet the tenets of the CV, OTs must document changes in children’s engagement in everyday life situations and build the evidence of OT’s efficacy in facilitating participation. PMID:21675342

  15. Smartphone, tablet computer and e-reader use by people with vision impairment.

    PubMed

    Crossland, Michael D; Silva, Rui S; Macedo, Antonio F

    2014-09-01

    Consumer electronic devices such as smartphones, tablet computers, and e-book readers have become far more widely used in recent years. Many of these devices contain accessibility features such as large print and speech. Anecdotal experience suggests people with vision impairment frequently make use of these systems. Here we survey people with self-identified vision impairment to determine their use of this equipment. An internet-based survey was advertised to people with vision impairment by word of mouth, social media, and online. Respondents were asked demographic information, what devices they owned, what they used these devices for, and what accessibility features they used. One hundred and thirty-two complete responses were received. Twenty-six percent of the sample reported that they had no vision and the remainder reported they had low vision. One hundred and seven people (81%) reported using a smartphone. Those with no vision were as likely to use a smartphone or tablet as those with low vision. Speech was found useful by 59% of smartphone users. Fifty-one percent of smartphone owners used the camera and screen as a magnifier. Forty-eight percent of the sample used a tablet computer, and 17% used an e-book reader. The most frequently cited reasons for not using these devices included cost and lack of interest. Smartphones, tablet computers, and e-book readers can be used by people with vision impairment. Speech is used by people with low vision as well as those with no vision. Many of our (self-selected) group used their smartphone camera and screen as a magnifier, and others used the camera flash as a spotlight. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  16. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputing visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  17. Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat

    PubMed Central

    Casanova, Joaquin J.; O'Shaughnessy, Susan A.; Evett, Steven R.; Rush, Charles M.

    2014-01-01

    Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications. PMID:25251410
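
    Expectation maximization as used above fits a mixture model and assigns each pixel to the most probable class. A toy 1-D analogue with two Gaussian components on scalar "hue" samples; the paper's EM operates on full colour images, and all numbers below are synthetic:

```python
import math
import random

def em_two_gaussians(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM; return
    (means, variances, weights). A toy analogue of the paper's EM
    segmentation of soil vs. vegetation pixels."""
    mu = [min(data), max(data)]        # initialize at the extremes
    var = [100.0, 100.0]               # broad start avoids underflow
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate parameters from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
            w[k] = nk / len(data)
    return mu, var, w

# Synthetic "hue" samples: soil pixels near 30, vegetation near 115.
random.seed(1)
hues = ([random.gauss(30, 4) for _ in range(200)]
        + [random.gauss(115, 6) for _ in range(200)])
mu, var, w = em_two_gaussians(hues)
# mu[0] recovers the soil mode (~30), mu[1] the vegetation mode (~115);
# each pixel is then labelled by its higher-responsibility component.
```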

  18. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with an unmanned aerial vehicle (UAV) system has used remote control through a ground control system over a radio frequency (RF) modem with a bandwidth of about 430 MHz. However, this RF-modem approach is limited in long-distance communication. The smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi were therefore used to implement a UAV communication module that carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone in areas that require image capture, together with software for operating the smart camera and managing it; it combines automatic shooting driven by the smart camera's sensors with a shooting catalog that manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open-source tools used include Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  19. Computer vision-based analysis of foods: a non-destructive colour measurement tool to monitor quality and safety.

    PubMed

    Mogol, Burçe Ataç; Gökmen, Vural

    2014-05-01

    Computer vision-based image analysis has been widely used in the food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed to extract mean colour or featured colour information from the digital images of foods. These types of information may be of particular importance as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with acrylamide content of potato chips or cookies. Or, porosity index as an important physical property of breadcrumb can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products in a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
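
    A browning-ratio style measurement reduces to classifying surface pixels and taking a fraction. A purely illustrative sketch; the paper's actual colour thresholds and CIE L*a*b* conversion are not reproduced here, and the classification rule below is an assumption:

```python
def browning_ratio(pixels, is_brown):
    """Fraction of food-surface pixels classified as brown.

    `pixels` is a list of (R, G, B) tuples; `is_brown` is a
    classifier predicate supplied by the caller.
    """
    brown = sum(1 for p in pixels if is_brown(p))
    return brown / len(pixels)

# Illustrative rule only: brownish pixels have R > G > B and
# limited overall brightness (not the paper's calibrated thresholds).
def simple_brown_rule(p):
    r, g, b = p
    return r > g > b and (r + g + b) < 450

surface = [(180, 120, 60), (200, 190, 170), (150, 90, 40), (210, 205, 200)]
print(browning_ratio(surface, simple_brown_rule))  # -> 0.5
```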

  20. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    PubMed

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.

  1. Image analysis of representative food structures: application of the bootstrap method.

    PubMed

    Ramírez, Cristian; Germain, Juan C; Aguilera, José M

    2009-08-01

    Images (for example, photomicrographs) are routinely used as qualitative evidence of the microstructure of foods. In quantitative image analysis it is important to estimate the area (or volume) to be sampled, the field of view, and the resolution. The bootstrap method is proposed to estimate the size of the sampling area as a function of the coefficient of variation (CV(Bn)) and standard error (SE(Bn)) of the bootstrap, taking sub-areas of different sizes. The bootstrap method was applied to simulated and real structures (apple tissue). For simulated structures, 10 computer-generated images were constructed containing 225 black circles (elements) and different coefficients of variation (CV(image)). For apple tissue, 8 images of apple tissue containing cellular cavities with different CV(image) were analyzed. Results confirmed that for simulated and real structures, increasing the size of the sampling area decreased the CV(Bn) and SE(Bn). Furthermore, there was a linear relationship between CV(image) and CV(Bn). For example, to obtain a CV(Bn) = 0.10 in an image with CV(image) = 0.60, a sampling area of 400 x 400 pixels (11% of the whole image) was required, whereas if CV(image) = 1.46, a sampling area of 1000 x 1000 pixels (69% of the whole image) became necessary. This suggests that a large dispersion of element sizes in an image requires increasingly larger sampling areas or a larger number of images.
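
    The bootstrap procedure above can be sketched in a few lines: resample the measurements with replacement, recompute the mean each time, and report the spread of those means relative to their average. The measurement values below are hypothetical:

```python
import random

def bootstrap_cv_se(values, n_boot=2000, seed=0):
    """Bootstrap estimate of the coefficient of variation (SE/mean)
    of the mean of `values`, plus the bootstrap standard error.

    Mirrors the paper's idea of resampling sub-area measurements to
    judge whether a sampling area is large enough (CV target ~0.15
    in the fly-monitoring study, ~0.10 here).
    """
    rng = random.Random(seed)
    n = len(values)
    means = []
    for _ in range(n_boot):
        sample = [values[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    grand = sum(means) / n_boot
    se = (sum((m - grand) ** 2 for m in means) / (n_boot - 1)) ** 0.5
    return se / grand, se

# Hypothetical element counts measured in 12 sub-areas of one image:
counts = [21, 18, 25, 19, 23, 22, 17, 24, 20, 21, 19, 26]
cv, se = bootstrap_cv_se(counts)
# A small CV (well under 0.10) suggests this sampling area suffices.
```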

  2. Job-shop scheduling applied to computer vision

    NASA Astrophysics Data System (ADS)

    Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David

    1997-09-01

    This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and the waiting time of in-process tasks. This condition is very important in some applications of computer vision in which the time to finish the total process is particularly critical -- quality control in industrial inspection, real-time computer vision, guided robots. The scheduling algorithm is based on two matrices obtained from the precedence relationships between tasks, together with data derived from them. The developed scheduling algorithm has been tested in an application of quality control using computer vision. The results obtained have been satisfactory in the application of different image processing algorithms.
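
    The paper's two-matrix scheduler is not reproduced here, but the problem it solves can be illustrated with a generic greedy list-scheduling sketch: repeatedly assign the longest ready task (all predecessors finished) to the earliest-free processor. The pipeline below is a made-up example:

```python
import heapq

def list_schedule(durations, precedences, m):
    """Greedy list scheduling of tasks on m identical processors.

    durations: {task: processing time}; precedences: {task: set of
    prerequisite tasks}. Assumes an acyclic precedence graph.
    Returns (makespan, start_times). A generic greedy sketch, not
    the two-matrix algorithm of the paper.
    """
    start, finish = {}, {}
    free = [0.0] * m                  # next-free time per processor
    heapq.heapify(free)
    pending = dict(durations)
    while pending:
        # tasks whose prerequisites have all finished
        ready = [t for t in pending
                 if all(p in finish for p in precedences.get(t, ()))]
        # longest ready task first (a common greedy priority rule)
        t = max(ready, key=lambda u: pending[u])
        proc_free = heapq.heappop(free)
        est = max([proc_free] + [finish[p]
                                 for p in precedences.get(t, ())])
        start[t] = est
        finish[t] = est + pending.pop(t)
        heapq.heappush(free, finish[t])
    return max(finish.values()), start

# Toy vision pipeline: grab -> (filter, threshold) -> measure, 2 CPUs.
durations = {"grab": 2, "filter": 3, "threshold": 1, "measure": 2}
prec = {"filter": {"grab"}, "threshold": {"grab"},
        "measure": {"filter", "threshold"}}
makespan, starts = list_schedule(durations, prec, 2)
print(makespan)  # -> 7.0
```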

  3. Development of the method of aggregation to determine the current storage area using computer vision and radiofrequency identification

    NASA Astrophysics Data System (ADS)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for positioning and control systems in industrial plants, based on aggregation of computer vision and radio-frequency identification to determine the current storage area. It describes the hardware design of an industrial product positioning system on the plant territory based on a radio-frequency grid, and the hardware design of a positioning system based on computer vision methods. It then describes the aggregation method that determines the current storage area by combining computer vision and radio-frequency identification. Experimental studies under laboratory and production conditions have been conducted and are described in the article.

  4. Enhanced computer vision with Microsoft Kinect sensor: a review.

    PubMed

    Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.

  5. Texture and art with deep neural networks.

    PubMed

    Gatys, Leon A; Ecker, Alexander S; Bethge, Matthias

    2017-10-01

    Although the studies of biological vision and computer vision attempt to understand powerful visual information processing from different angles, they have a long history of informing each other. Recent advances in texture synthesis that were motivated by visual neuroscience have led to a substantial advance in image synthesis and manipulation in computer vision using convolutional neural networks (CNNs). Here, we review these recent advances and discuss how they can in turn inspire new research in visual perception and computational neuroscience. Copyright © 2017. Published by Elsevier Ltd.

  6. Computer-mediated communication and time pressure induce higher cardiovascular responses in the preparatory and execution phases of cooperative tasks.

    PubMed

    Costa Ferrer, Raquel; Serrano Rosa, Miguel Ángel; Zornoza Abad, Ana; Salvador Fernández-Montejo, Alicia

    2010-11-01

    The cardiovascular (CV) response to social challenge and stress is associated with the etiology of cardiovascular diseases. New ways of communication, time pressure and different types of information are common in our society. In this study, the cardiovascular response to two different tasks (open vs. closed information) was examined employing different communication channels (computer-mediated vs. face-to-face) and with different pace control (self vs. external). Our results indicate that there was a higher CV response in the computer-mediated condition, on the closed information task and in the externally paced condition. The role of these factors should be considered when studying the consequences of social stress and their underlying mechanisms.

  7. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    NASA Astrophysics Data System (ADS)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer the possibility of substantial efficiency improvements compared to a fully featured workstation.
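    The FFT being benchmarked above is the core kernel of wave-optics propagation. As a hedged sketch (NumPy's FFT standing in for whichever backend is substituted; the grid size, wavelength, and distance are illustrative, not WavePy's defaults), one angular-spectrum propagation step looks like the following. Free-space propagation is unitary, so total beam power should be conserved regardless of which FFT implementation is swapped in, which makes a convenient correctness check when comparing backends.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                   # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    # Transfer function of free space; evanescent components are zeroed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Gaussian beam on a 128x128 grid (illustrative parameters).
n, dx = 128, 1e-4                                  # 0.1 mm sampling
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
beam = np.exp(-(X**2 + Y**2) / (2 * (8 * dx) ** 2)).astype(complex)
out = angular_spectrum_propagate(beam, wavelength=1e-6, dx=dx, z=0.05)
```

Timing this function with different FFT backends substituted for `np.fft` is essentially the experiment the abstract describes.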

  8. Impact of computer use on children's vision.

    PubMed

    Kozeis, N

    2009-10-01

    Today, millions of children use computers on a daily basis. Extensive viewing of the computer screen can lead to eye discomfort, fatigue, blurred vision and headaches, dry eyes and other symptoms of eyestrain. These symptoms may be caused by poor lighting, glare, an improper work station set-up, vision problems of which the person was not previously aware, or a combination of these factors. Children can experience many of the same symptoms related to computer use as adults. However, some unique aspects of how children use computers may make them more susceptible than adults to the development of these problems. In this study, the most common eye symptoms related to computer use in childhood, the possible causes and ways to avoid them are reviewed.

  9. Application of computational fluid dynamics (CFD) simulation in a vertical axis wind turbine (VAWT) system

    NASA Astrophysics Data System (ADS)

    Kao, Jui-Hsiang; Tseng, Po-Yuan

    2018-01-01

    The objective of this paper is to describe the application of CFD (computational fluid dynamics) technology to the matching of turbine blades and generator to increase the efficiency of a vertical axis wind turbine (VAWT). A VAWT is treated as the study case here. The SST (shear-stress transport) k-ω turbulence model with the SIMPLE algorithm in transient state is applied to solve the T (torque)-N (r/min) curves of the turbine blades at different wind speeds. The T-N curves of the generator at different CV (constant voltage) modes are measured. Thus, the T-N curves of the turbine blades at different wind speeds can be matched against the T-N curves of the generator at different CV modes to find the optimal CV mode. Once the optimal CV mode is selected, the characteristics of the operating points, such as tip speed ratio, revolutions per minute, blade torque, and efficiency, can be identified. The results show that, if the two systems are matched well, the final output power at a high wind speed of 9-10 m/s will be increased by 15%.
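    The matching step described above amounts to intersecting the turbine and generator torque-speed curves: the steady operating point is the rpm at which the turbine torque equals the generator load torque. A minimal sketch (NumPy; the linear curves below are made-up illustrations, not the paper's CFD or measured data):

```python
import numpy as np

def operating_point(n_grid, t_turbine, t_generator):
    """Return (rpm, torque) where the turbine and generator T-N curves cross.

    Both curves are sampled on the same rpm grid; the crossing is located
    by linear interpolation of their torque difference.
    """
    diff = t_turbine - t_generator
    s = np.where(np.diff(np.sign(diff)) != 0)[0][0]   # first sign change
    frac = diff[s] / (diff[s] - diff[s + 1])          # interpolate to zero
    rpm = n_grid[s] + frac * (n_grid[s + 1] - n_grid[s])
    torque = np.interp(rpm, n_grid, t_turbine)
    return rpm, torque

# Illustrative curves: turbine torque falls with rpm, generator load rises.
n = np.linspace(0, 400, 81)
t_turb = 20.0 - 0.04 * n          # N*m, made-up turbine curve at one wind speed
t_gen = 0.06 * n                  # N*m, made-up generator curve at one CV mode
rpm, tq = operating_point(n, t_turb, t_gen)
```

Repeating this for each wind speed and each CV mode, then comparing the resulting output powers, mirrors the matching procedure the abstract describes.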

  10. Operational Assessment of Color Vision

    DTIC Science & Technology

    2016-06-20

    evaluated in this study. 15. SUBJECT TERMS Color vision, aviation, cone contrast test, Colour Assessment & Diagnosis, color Dx, OBVA 16. SECURITY...symbologies are frequently used to aid or direct critical activities such as aircraft landing approaches or railroad right-of-way designations...computer-generated display systems have facilitated the development of computer-based, automated tests of color vision [14,15]. The United Kingdom’s

  11. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

    We re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half-century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the four decades following Licklider’s challenge, despite significant advancements in the fields of human factors and computer science. Licklider’s vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider’s vision a reality. This paper provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.

  12. A conduction velocity adapted eikonal model for electrophysiology problems with re-excitability evaluation.

    PubMed

    Corrado, Cesare; Zemzemi, Nejib

    2018-01-01

    Computational models of heart electrophysiology have attracted considerable interest in the medical community as they represent a novel framework for the study of the mechanisms underpinning heart pathologies. The high demand for computational resources and the long computational time required to evaluate the model solution hamper the use of detailed computational models in clinical applications. In this paper, we present a multi-front eikonal algorithm that adapts the conduction velocity (CV) to the activation frequency of the tissue substrate. We then couple the new eikonal algorithm with the Mitchell-Schaeffer (MS) ionic model to determine the tissue electrical state. Compared to the standard eikonal model, this model introduces three novelties: first, it evaluates the local value of the transmembrane potential and of the ionic variable by solving an ionic model; second, it computes the action potential duration (APD) and the diastolic interval (DI) from the solution of the MS model and uses them to determine if the tissue is locally re-excitable; third, it adapts the CV to the underpinning electrophysiological state through an analytical expression of the CV restitution and the computed local DI. We conduct a series of simulations on a 3D tissue slab and on a realistic heart geometry and compare the solutions with those obtained by solving the monodomain equation. Our results show that the new model is significantly more accurate than the standard eikonal model. The proposed model enables the numerical simulation of the heart electrophysiology on a clinical time scale and thus constitutes a viable model candidate for computer-guided radio-frequency ablation. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Computer Vision Syndrome: Implications for the Occupational Health Nurse.

    PubMed

    Lurati, Ann Regina

    2018-02-01

    Computers and other digital devices are commonly used both in the workplace and during leisure time. Computer vision syndrome (CVS) is a new health-related condition that negatively affects workers. This article reviews the pathology of and interventions for CVS with implications for the occupational health nurse.

  14. A multidisciplinary approach to solving computer related vision problems.

    PubMed

    Long, Jennifer; Helland, Magne

    2012-09-01

    This paper proposes a multidisciplinary approach to solving computer related vision issues by including optometry as a part of the problem-solving team. Computer workstation design is increasing in complexity. There are at least ten different professions who contribute to workstation design or who provide advice to improve worker comfort, safety and efficiency. Optometrists have a role identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, it is possible that advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration and examples of successful partnerships at a number of professional levels including individual relationships between optometrists and ergonomists when they have mutual clients/patients, in undergraduate and postgraduate education and in research. There is also scope for dialogue between optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve vision related computer issues in a cohesive, rather than fragmented way. Further exploration is required to understand the barriers to these professional relationships. © 2012 The College of Optometrists.

  15. How do we choose the best model? The impact of cross-validation design on model evaluation for buried threat detection in ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Malof, Jordan M.; Reichman, Daniël.; Collins, Leslie M.

    2018-04-01

    A great deal of research has been focused on the development of computer algorithms for buried threat detection (BTD) in ground penetrating radar (GPR) data. Most recently proposed BTD algorithms are supervised, and therefore they employ machine learning models that infer their parameters using training data. Cross-validation (CV) is a popular method for evaluating the performance of such algorithms, in which the available data is systematically split into N disjoint subsets, and an algorithm is repeatedly trained on N-1 subsets and tested on the excluded subset. There are several common types of CV in BTD, which vary principally in the spatial criterion used to partition the data: site-based, lane-based, region-based, etc. The performance metrics obtained via CV are often used to suggest the superiority of one model over others; however, most studies utilize just one type of CV, and the impact of this choice is unclear. Here we employ several types of CV to evaluate algorithms from a recent large-scale BTD study. The results indicate that the rank-order of the performance of the algorithms varies substantially depending upon which type of CV is used. For example, the rank-1 algorithm for region-based CV is the lowest ranked algorithm for site-based CV. This suggests that any algorithm results should be interpreted carefully with respect to the type of CV employed. We discuss some potential interpretations of performance, given a particular type of CV.
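    The spatial CV variants named above (site-based, lane-based, region-based) all reduce to partitioning by a grouping label and holding one group out at a time. A small stdlib-only sketch (the site labels are illustrative) builds leave-one-group-out folds, guaranteeing that no group appears in both the training and test sets of the same fold:

```python
from collections import defaultdict

def group_folds(sample_groups):
    """Leave-one-group-out cross-validation folds.

    sample_groups: list assigning each sample index a group label
    (e.g. a site, lane, or region ID). Returns a list of
    (train_indices, test_indices) pairs, one fold per distinct group.
    """
    by_group = defaultdict(list)
    for idx, g in enumerate(sample_groups):
        by_group[g].append(idx)
    folds = []
    for g, test in sorted(by_group.items()):
        # Train on every sample NOT belonging to the held-out group.
        train = [i for i in range(len(sample_groups)) if sample_groups[i] != g]
        folds.append((train, test))
    return folds

# Six GPR samples collected at three sites: site-based CV yields three folds.
groups = ["siteA", "siteA", "siteB", "siteB", "siteC", "siteC"]
folds = group_folds(groups)
```

Changing only the label vector (lane IDs instead of site IDs, say) changes the CV type without touching the algorithm under evaluation, which is exactly why the choice of spatial criterion can silently change the resulting rank-order.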

  16. Object Tracking Vision System for Mapping the UCN τ Apparatus Volume

    NASA Astrophysics Data System (ADS)

    Lumb, Rowan; UCNtau Collaboration

    2016-09-01

    The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational ``bottle'' system. This system holds low energy, or ultracold, neutrons in the apparatus with the constraint of gravity, and keeps these low energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the ``bottle'' volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system will be presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and the open-source OpenCV library to track an object's 3-D position in space in real time. The desired resolution is ±1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low field points which could allow neutrons to depolarize and possibly escape from the apparatus undetected. Tennessee Technological University.
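    Recovering a 3-D position from two calibrated cameras, as this system does (OpenCV's `triangulatePoints` performs the same operation), reduces to linear (DLT) triangulation from the two camera projection matrices. A NumPy sketch with two hypothetical calibrated cameras (the intrinsics, 100 mm baseline, and test point are illustrative assumptions) recovers a noise-free point exactly, well inside the stated ±1 mm target:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2:   3x4 camera projection matrices
    uv1, uv2: (u, v) pixel coordinates of the same point in each view
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize

# Two hypothetical cameras: identical intrinsics, second shifted 100 mm in x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

X_true = np.array([50.0, -20.0, 1000.0])          # mm, in camera-1 frame
project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With real images, lens distortion correction and stereo calibration must precede this step; the DLT above is only the final geometric kernel.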

  17. Cloud Offload in Hostile Environments

    DTIC Science & Technology

    2011-12-01

    of recognized objects in an input image. FACE: Windows XP C++ application based on the OpenCV library [45]. It returns the coordinates and identities...SOLDIER. Energy-Efficient Technolo- gies for the Dismounted Soldier”. National Research Council, 1997. [16] COMMITTEE ON SOLDIER POWER/ENERGY SYSTEMS...vol. 4658 of Lecture Notes in Computer Science. Springer Berlin / Heidelberg, 2007. [45] OPENCV . OpenCV Wiki. http://opencv.willowgarage.com/wiki/. [46

  18. A Modeling Study of the Effects of Vocal Tract Movement Duration and Magnitude on the F2 Trajectory in CV Words

    ERIC Educational Resources Information Center

    Neely, Kimberly D.; Bunton, Kate; Story, Brad H.

    2016-01-01

    Purpose: This study used a computational vocal tract model to investigate the relationship of diphthong duration and vocal tract movement magnitude to measures of the F2 trajectory in CV words. Method: Three words ("bough," "boy," and "buy") were simulated on the basis of an adult female vocal tract model, in which…

  19. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
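    Recognition by maximizing cross-correlation coefficients, one of the methods surveyed above, can be sketched as normalized cross-correlation (NCC) template matching. A deliberately brute-force NumPy version (the toy image and template are illustrative; real systems use FFT-based or pyramid implementations for speed):

```python
import numpy as np

def ncc_match(image, template):
    """Return ((row, col), score): the top-left corner where the normalized
    cross-correlation between template and image window is maximal."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            if denom == 0:
                continue                       # flat window: correlation undefined
            score = float((wz * t).sum() / denom)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# Toy image with a bright 3x3 blob at (4, 5); the template is that blob.
img = np.zeros((10, 12))
blob = np.array([[1, 2, 1], [2, 5, 2], [1, 2, 1]], dtype=float)
img[4:7, 5:8] = blob
pos, score = ncc_match(img, blob)
```

Because the correlation is normalized, the score is invariant to uniform brightness and contrast changes of the window, which is what makes it usable as a recognition measure.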

  20. Comparative randomised active drug controlled clinical trial of a herbal eye drop in computer vision syndrome.

    PubMed

    Chatterjee, Pranab Kr; Bairagi, Debasis; Roy, Sudipta; Majumder, Nilay Kr; Paul, Ratish Ch; Bagchi, Sunil Ch

    2005-07-01

    A comparative double-blind placebo-controlled clinical trial of a herbal eye drop (itone) was conducted to find out its efficacy and safety in 120 patients with computer vision syndrome. Patients using computers for more than 3 hours continuously per day having symptoms of watering, redness, asthenia, irritation, foreign body sensation and signs of conjunctival hyperaemia, corneal filaments and mucus were studied. One hundred and twenty patients were randomly given either placebo, tears substitute (tears plus) or itone in identical vials with specific code number and were instructed to put one drop four times daily for 6 weeks. Subjective and objective assessments were done at bi-weekly intervals. In computer vision syndrome both subjective and objective improvements were noticed with itone drops. Itone drop was found significantly better than placebo (p<0.01) and almost identical results were observed with tears plus (difference was not statistically significant). Itone is considered to be a useful drug in computer vision syndrome.

  1. Integrating Mobile Robotics and Vision with Undergraduate Computer Science

    ERIC Educational Resources Information Center

    Cielniak, G.; Bellotto, N.; Duckett, T.

    2013-01-01

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of…

  2. Parallel computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhr, L.

    1987-01-01

    This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.

  3. Rationale, Design and Implementation of a Computer Vision-Based Interactive E-Learning System

    ERIC Educational Resources Information Center

    Xu, Richard Y. D.; Jin, Jesse S.

    2007-01-01

    This article presents a schematic application of computer vision technologies to e-learning that is synchronous, peer-to-peer-based, and supports an instructor's interaction with non-computer teaching equipments. The article first discusses the importance of these focused e-learning areas, where the properties include accurate bidirectional…

  4. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  5. Sensor Control of Robot Arc Welding

    NASA Technical Reports Server (NTRS)

    Sias, F. R., Jr.

    1983-01-01

    The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.

  6. Tracking by Identification Using Computer Vision and Radio

    PubMed Central

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

    We present a novel system for detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds: excellent computer-vision-based localization and strong identity information provided by the radio system. It is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for evaluating systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485
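    The core of tracking by identification is assigning radio-provided identities to anonymous vision detections. A minimal sketch (pure Python, brute-force over permutations; the coordinates are toy values, and a real system of this kind would use a proper assignment solver such as the Hungarian algorithm) picks the identity assignment that minimizes the total distance between vision detections and radio position estimates:

```python
from itertools import permutations
import math

def assign_identities(vision_pts, radio_pts):
    """Match each anonymous vision detection to a radio-identified position.

    Brute-force minimum-total-distance assignment: fine for a handful of
    people, factorial in their number. Returns a tuple perm such that
    vision detection i receives radio identity perm[i].
    """
    n = len(vision_pts)
    def cost(perm):
        return sum(math.dist(vision_pts[i], radio_pts[perm[i]]) for i in range(n))
    return min(permutations(range(n)), key=cost)

# Three people: accurate vision positions, noisier radio positions carrying IDs.
vision = [(1.0, 1.0), (4.0, 2.0), (2.0, 5.0)]
radio = [(4.3, 1.8), (1.2, 0.7), (2.1, 5.4)]   # radio IDs 0, 1, 2
ids = assign_identities(vision, radio)
```

Because the identity comes from the radio tag at every step rather than being propagated frame to frame, an incorrect match in one frame cannot corrupt later frames, which is the property the abstract calls imperviousness to propagated identity switches.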

  7. Continuous-Variable Instantaneous Quantum Computing is Hard to Sample.

    PubMed

    Douce, T; Markham, D; Kashefi, E; Diamanti, E; Coudreau, T; Milman, P; van Loock, P; Ferrini, G

    2017-02-17

    Instantaneous quantum computing is a subuniversal quantum complexity class, whose circuits have proven to be hard to simulate classically in the discrete-variable realm. We extend this proof to the continuous-variable (CV) domain by using squeezed states and homodyne detection, and by exploring the properties of postselected circuits. In order to treat postselection in CVs, we consider finitely resolved homodyne detectors, corresponding to a realistic scheme based on discrete probability distributions of the measurement outcomes. The unavoidable errors stemming from the use of finitely squeezed states are suppressed through a qubit-into-oscillator Gottesman-Kitaev-Preskill encoding of quantum information, which was previously shown to enable fault-tolerant CV quantum computation. Finally, we show that, in order to render postselected computational classes in CVs meaningful, a logarithmic scaling of the squeezing parameter with the circuit size is necessary, translating into a polynomial scaling of the input energy.

  8. Toothguide Trainer tests with color vision deficiency simulation monitor.

    PubMed

    Borbély, Judit; Varsányi, Balázs; Fejérdy, Pál; Hermann, Péter; Jakstat, Holger A

    2010-01-01

    The aim of this study was to evaluate whether simulated severe red and green color vision deficiency (CVD) influenced color matching results and to investigate whether training with Toothguide Trainer (TT) computer program enabled better color matching results. A total of 31 color normal dental students participated in the study. Every participant had to pass the Ishihara Test. Participants with a red/green color vision deficiency were excluded. A lecture on tooth color matching was given, and individual training with TT was performed. To measure the individual tooth color matching results in normal and color deficient display modes, the TT final exam was displayed on a calibrated monitor that served as a hardware-based method of simulating protanopy and deuteranopy. Data from the TT final exams were collected in normal and in severe red and green CVD-simulating monitor display modes. Color difference values for each participant in each display mode were computed (∑ΔE*ab), and the respective means and standard deviations were calculated. The Student's t-test was used in statistical evaluation. Participants made larger ΔE*ab errors in severe color vision deficient display modes than in the normal monitor mode. TT tests showed significant (p<0.05) difference in the tooth color matching results of severe green color vision deficiency simulation mode compared to normal vision mode. Students' shade matching results were significantly better after training (p=0.009). Computer-simulated severe color vision deficiency mode resulted in significantly worse color matching quality compared to normal color vision mode. Toothguide Trainer computer program improved color matching results. Copyright © 2010 Elsevier Ltd. All rights reserved.
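    The matching error above is scored with the CIE 1976 color difference ΔE*ab, which is simply the Euclidean distance between two colors in CIELAB space. A small sketch (pure Python; the shade coordinates are illustrative, not the study's data):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference ΔE*ab: Euclidean distance in CIELAB.

    lab1, lab2: (L*, a*, b*) coordinate triples.
    """
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two hypothetical tooth-shade measurements (L*, a*, b*).
shade_chosen = (72.0, 1.5, 18.0)
shade_target = (70.0, 1.0, 16.0)
err = delta_e_ab(shade_chosen, shade_target)
```

Summing such per-tab differences over a whole shade-matching exam gives the ∑ΔE*ab score used to compare the display modes.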

  9. [Meibomian gland dysfunction in computer vision syndrome].

    PubMed

    Pimenidi, M K; Polunin, G S; Safonova, T N

    2010-01-01

    This article reviews the etiology and pathogenesis of dry eye syndrome due to meibomian gland dysfunction (MGD). It is shown that blink rate influences meibomian gland functioning and the development of computer vision syndrome. Current diagnosis and treatment options for MGD are presented.

  10. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
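    The surface-reconstruction problem used as the paper's example (a smooth surface from sparse data) can be minimized iteratively. The sketch below (NumPy; grid size and sample points are illustrative) relaxes the quadratic membrane energy by repeatedly replacing each free cell with the mean of its neighbors while clamping the known data, i.e. the convex part of the problem; the discontinuity-preserving extension that makes the energy nonconvex is the part the analog networks in the paper address.

```python
import numpy as np

def reconstruct_surface(shape, known, n_iter=2000):
    """Fill a grid by membrane (harmonic) interpolation of sparse data.

    shape: (rows, cols) of the surface grid
    known: dict mapping (row, col) -> height of a sparse measurement
    Iteratively sets each cell to the mean of its four neighbors, which
    minimizes the sum of squared first differences (membrane energy)
    subject to the data constraints.
    """
    z = np.zeros(shape)
    for (r, c), v in known.items():
        z[r, c] = v
    for _ in range(n_iter):
        avg = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 4.0
        for (r, c), v in known.items():
            avg[r, c] = v                 # re-clamp the data points
        z = avg
    return z

# Sparse samples of the ramp z = column index: clamp the two end columns
# and let the harmonic fill recover the linear interior.
known = {(r, 0): 0.0 for r in range(8)}
known.update({(r, 7): 7.0 for r in range(8)})
z = reconstruct_surface((8, 8), known)
```

Each relaxation sweep is a local, parallel update, which is why the same computation maps naturally onto the analog resistive networks discussed in the paper.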

  11. The diagnostic performance of expert dermoscopists vs a computer-vision system on small-diameter melanomas.

    PubMed

    Friedman, Robert J; Gutkowicz-Krusin, Dina; Farber, Michele J; Warycha, Melanie; Schneider-Kels, Lori; Papastathis, Nicole; Mihm, Martin C; Googe, Paul; King, Roy; Prieto, Victor G; Kopf, Alfred W; Polsky, David; Rabinovitz, Harold; Oliviero, Margaret; Cognetta, Armand; Rigel, Darrell S; Marghoob, Ashfaq; Rivers, Jason; Johr, Robert; Grant-Kels, Jane M; Tsao, Hensin

    2008-04-01

    To evaluate the performance of dermoscopists in diagnosing small pigmented skin lesions (diameter

  12. Taming Crowded Visual Scenes

    DTIC Science & Technology

    2014-08-12

    Nolan Warner, Mubarak Shah. Tracking in Dense Crowds Using Prominence and Neighborhood Motion Concurrence, IEEE Transactions on Pattern Analysis...of computer vision, computer graphics and evacuation dynamics by providing a common platform, and provides...areas that include Computer Vision, Computer Graphics, and Pedestrian Evacuation Dynamics. Despite the

  13. Computer vision syndrome: a review of ocular causes and potential treatments.

    PubMed

    Rosenfield, Mark

    2011-09-01

    Computer vision syndrome (CVS) is the combination of eye and vision problems associated with the use of computers. In modern western society the use of computers for both vocational and avocational activities is almost universal. However, CVS may have a significant impact not only on visual comfort but also occupational productivity since between 64% and 90% of computer users experience visual symptoms which may include eyestrain, headaches, ocular discomfort, dry eye, diplopia and blurred vision either at near or when looking into the distance after prolonged computer use. This paper reviews the principal ocular causes for this condition, namely oculomotor anomalies and dry eye. Accommodation and vergence responses to electronic screens appear to be similar to those found when viewing printed materials, whereas the prevalence of dry eye symptoms is greater during computer operation. The latter is probably due to a decrease in blink rate and blink amplitude, as well as increased corneal exposure resulting from the monitor frequently being positioned in primary gaze. However, the efficacy of proposed treatments to reduce symptoms of CVS is unproven. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will enable practitioners to optimize visual comfort and efficiency during computer operation. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.

  14. An Enduring Dialogue between Computational and Empirical Vision.

    PubMed

    Martinez-Conde, Susana; Macknik, Stephen L; Heeger, David J

    2018-04-01

    In the late 1970s, key discoveries in neurophysiology, psychophysics, computer vision, and image processing had reached a tipping point that would shape visual science for decades to come. David Marr and Ellen Hildreth's 'Theory of edge detection', published in 1980, set out to integrate the newly available wealth of data from behavioral, physiological, and computational approaches in a unifying theory. Although their work had wide and enduring ramifications, their most important contribution may have been to consolidate the foundations of the ongoing dialogue between theoretical and empirical vision science. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in C with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still unclassified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, discarding irrelevant events such as birds, and locate the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images, and counts its discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can contain more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the number of discharges per event was correctly computed. The neural network used in this project achieved a success rate of 90%. The videos used in this experiment were acquired by seven video cameras installed in São Bernardo do Campo, Brazil, which continuously recorded lightning events during the summer. The cameras were arranged to cover a 360° view, recording all data at a time resolution of 33 ms. During this period, several convective storms were recorded.
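The frame-differencing idea behind the detection module can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses plain NumPy rather than the OpenCV calls the system is built on, and the function name and thresholds (`detect_event`, `diff_thresh`, `pixel_frac`) are hypothetical.

```python
import numpy as np

def detect_event(prev_frame, frame, diff_thresh=40, pixel_frac=0.001):
    """Flag a frame as a candidate event when enough pixels differ
    significantly from the previous frame (thresholds illustrative)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > diff_thresh)
    return changed > pixel_frac * frame.size

# Synthetic 8-bit grayscale frames: a bright streak appears in frame2.
frame1 = np.zeros((120, 160), dtype=np.uint8)
frame2 = frame1.copy()
frame2[10:100, 80] = 255  # vertical "lightning" streak

assert not detect_event(frame1, frame1)
assert detect_event(frame1, frame2)
```

Once a frame is flagged, the brightness and shape analyses described above would run only on the changed region, keeping the per-frame cost low.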

  16. Low computation vision-based navigation for a Martian rover

    NASA Technical Reports Server (NTRS)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  17. Computational models of human vision with applications

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision. The spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform as a tool of image processing was evaluated, and it is suggested that it could be used in signal processing applications, e.g., image processing.

  18. Variability of quadriceps femoris motor neuron discharge and muscle force in human aging.

    PubMed

    Welsh, Seth J; Dinenno, Devin V; Tracy, Brian L

    2007-05-01

    The purpose was to determine the contribution of visual feedback and the effect of aging on the variability of knee extensor (KE) muscle force and motor unit (MU) discharge. Single MUs were recorded during two types of isometric trials: (1) visual feedback provided (VIS) and then removed (NOVIS) during the trial (34 MUs from young, 32 from elderly), and (2) only NOVIS (66 MUs from young, 77 from elderly) during the trial. Recruitment threshold (RT) ranged from 0-37% MVC. The standard deviation (SD) and coefficient of variation (CV) of muscle force and MU interspike interval (ISI) were measured during steady contractions at target forces ranging from 0.3 to 54% MVC. Force drift (<0.5 Hz) was removed before analysis. VIS/NOVIS trials: the decrease in the CV of ISI from VIS to NOVIS was greater for MUs from elderly (12.5 +/- 4.1 to 9.94 +/- 2.6%) than young (10.6 +/- 3.3 to 10.3 +/- 2.8%; age group x vision interaction, P = 0.006). The change in CV of force from VIS to NOVIS was significantly greater for elderly (1.45 to 1.05%) than young (1.42 to 1.41%). NOVIS-only trials: for all MUs, the average RT (6.6 +/- 7.7% MVC), target force above RT (1.20 +/- 2.7% MVC), SD of ISI (0.012 +/- 0.005 s), and CV of ISI (11.1 +/- 3.3%) were similar for young and elderly MUs. The CV of force was similar between age groups for trials between 0 and 3% MVC (1.74 +/- 0.74%) and was greater for young subjects from 3 to 10% MVC (1.47 +/- 0.5 vs. 1.21 +/- 0.4%) and >10% MVC (1.44 +/- 0.6 vs. 1.01 +/- 0.3%). The CV of ISI was similar between age groups for MUs in the 0-3, 3-10, and >10% bins of RT. Thus, the contribution of visuomotor correction to the variability of motor unit discharge and force is greater for elderly adults. The presence of visual feedback appears to be necessary to find greater discharge variability in motor units from the knee extensors of elderly adults.
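The discharge-variability measure used throughout this study, the coefficient of variation of the interspike interval, is simply the SD of the ISIs divided by their mean. A minimal sketch (the function name and firing-rate parameters are illustrative, not taken from the study):

```python
import numpy as np

def isi_cv(spike_times_s):
    """Coefficient of variation of interspike intervals, in percent:
    CV = SD(ISI) / mean(ISI) * 100."""
    isi = np.diff(np.sort(np.asarray(spike_times_s, dtype=float)))
    return float(np.std(isi) / np.mean(isi) * 100.0)

# A simulated motor unit firing at ~10 Hz with jittered intervals.
rng = np.random.default_rng(0)
spikes = np.cumsum(rng.normal(0.1, 0.011, size=200))
cv = isi_cv(spikes)
assert 5.0 < cv < 20.0  # roughly the ~11% range reported for NOVIS trials
```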

  19. Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.

    PubMed

    Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma

    2017-07-01

    The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of Computer vision syndrome. Information was collected from Medline, Embase & the National Library of Medicine over the last 30 years, up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with Computer vision syndrome present to a variety of different specialists, including General Practitioners, Neurologists, Stroke physicians and Ophthalmologists. While the condition is common, awareness is poor among the public and health professionals. Recognising this condition in the clinic or in emergency situations like the TIA clinic is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of Computer vision syndrome and education of health professionals are vital. Preventive strategies should routinely form part of workplace ergonomics. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.

  20. Comparative randomised controlled clinical trial of a herbal eye drop with artificial tear and placebo in computer vision syndrome.

    PubMed

    Biswas, N R; Nainiwal, S K; Das, G K; Langan, U; Dadeya, S C; Mongre, P K; Ravi, A K; Baidya, P

    2003-03-01

    A comparative randomised double-masked multicentric clinical trial was conducted to determine the efficacy and safety of a herbal eye drop preparation, itone eye drops, compared with artificial tears and placebo in 120 patients with computer vision syndrome. Patients using a computer for at least 2 hours continuously per day, having symptoms of irritation, foreign body sensation, watering, redness, headache, eyeache and signs of conjunctival congestion, mucous/debris, corneal filaments, corneal staining or lacrimal lake were included in this study. Every patient was instructed to put two drops of either the herbal drug, placebo or artificial tears in the eyes regularly four times daily for 6 weeks. Objective and subjective findings were recorded at bi-weekly intervals up to six weeks. Side-effects, if any, were also noted. In computer vision syndrome the herbal eye drop preparation was found significantly better than artificial tears (p < 0.01). No side-effects were noted with any of the drugs. Both subjective and objective improvements were observed in itone-treated cases. Thus, itone can be considered a useful drug in computer vision syndrome.

  1. Computer vision syndrome in presbyopia and beginning presbyopia: effects of spectacle lens type.

    PubMed

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    This office field study investigated the effects of different types of spectacle lenses habitually worn by computer users with presbyopia and in the beginning stages of presbyopia. Computer vision syndrome was assessed through reported complaints and ergonomic conditions. A questionnaire regarding the type of habitually worn near-vision lenses at the workplace, visual conditions and the levels of different types of complaints was administered to 175 participants aged 35 years and older (mean ± SD: 52.0 ± 6.7 years). Statistical factor analysis identified five specific aspects of the complaints. Workplace conditions were analysed based on photographs taken in typical working conditions. In the subgroup of 25 users between the ages of 36 and 57 years (mean 44 ± 5 years), who wore distance-vision lenses and performed more demanding occupational tasks, the reported extents of 'ocular strain', 'musculoskeletal strain' and 'headache' increased with the daily duration of computer work and explained up to 44 per cent of the variance (rs = 0.66). In the other subgroups, this effect was smaller, while in the complete sample (n = 175), this correlation was approximately rs = 0.2. The subgroup of 85 general-purpose progressive lens users (mean age 54 years) adopted head inclinations that were approximately seven degrees more elevated than those of the subgroups with single vision lenses. The present questionnaire was able to assess the complaints of computer users depending on the type of spectacle lenses worn. A missing near-vision addition among participants in the early stages of presbyopia was identified as a risk factor for complaints among those with longer daily durations of demanding computer work. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  2. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain is found to be able to emulate similar graph/network models; this implies an important paradigm shift in our knowledge about the brain, from neural networks to 'cortical software'. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where the nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of such analysis. This approach gives the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the 'what' and 'where' visual pathways. Such systems can open new horizons for the robotics and computer vision industries.

  3. Computer vision syndrome (CVS) - Thermographic Analysis

    NASA Astrophysics Data System (ADS)

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has seen exponential growth in recent decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their great acceptance by users. The consequences and impact of uninterrupted work in front of computer screens or displays on visual health have grabbed researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great effort, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of these are blurred vision, visual fatigue and Dry Eye Syndrome (DES) due to inadequate lubrication of the ocular surface as blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the existing differences in temperature variations of healthy ocular surfaces.

  4. Comparison of holographic and field theoretic complexities for time dependent thermofield double states

    NASA Astrophysics Data System (ADS)

    Yang, Run-Qiu; Niu, Chao; Zhang, Cheng-Yong; Kim, Keun-Young

    2018-02-01

    We compute the time-dependent complexity of the thermofield double states by four different proposals: two holographic proposals based on the "complexity-action" (CA) conjecture and "complexity-volume" (CV) conjecture, and two quantum field theoretic proposals based on the Fubini-Study metric (FS) and Finsler geometry (FG). We find that the four proposals yield both similarities and differences, which will be useful for deepening our understanding of complexity and sharpening its definition. In particular, at early times the complexity increases linearly in the CV and FG proposals, decreases linearly in the FS proposal, and does not change in the CA proposal. In the late-time limit, the CA, CV and FG proposals all show that the growth rate is 2E/(πℏ), saturating Lloyd's bound, while the FS proposal shows the growth rate is zero. It seems that the holographic CV conjecture and the field theoretic FG method are more correlated.

  5. Milestones on the road to independence for the blind

    NASA Astrophysics Data System (ADS)

    Reed, Kenneth

    1997-02-01

    Ken will talk about his experiences as an end user of technology. Even moderate technological progress in the field of pattern recognition and artificial intelligence can be, often surprisingly, of great help to the blind. An example is the providing of portable bar code scanners so that a blind person knows what he is buying and what color it is. In this age of microprocessors controlling everything, how can a blind person find out what his VCR is doing? Is there some technique that will allow a blind musician to convert print music into midi files to drive a synthesizer? Can computer vision help the blind cross a road including predictions of where oncoming traffic will be located? Can computer vision technology provide spoken description of scenes so a blind person can figure out where doors and entrances are located, and what the signage on the building says? He asks 'can computer vision help me flip a pancake?' His challenge to those in the computer vision field is 'where can we go from here?'

  6. A large-scale solar dynamics observatory image dataset for computer vision applications.

    PubMed

    Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and biggest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high-resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate wider adoption and interest from both the computer vision and solar physics communities.

  7. Optical hybrid quantum teleportation and its applications

    NASA Astrophysics Data System (ADS)

    Takeda, Shuntaro; Okada, Masanori; Furusawa, Akira

    2017-08-01

    Quantum teleportation, a transfer protocol of quantum states, is the essence of many sophisticated quantum information protocols. There have been two complementary approaches to optical quantum teleportation: discrete variables (DVs) and continuous variables (CVs). However, both approaches have pros and cons. Here we take a "hybrid" approach to overcome the current limitations: CV quantum teleportation of DVs. This approach enabled the first realization of deterministic quantum teleportation of photonic qubits without post-selection. We also applied the hybrid scheme to several experiments, including entanglement swapping between DVs and CVs, conditional CV teleportation of single photons, and CV teleportation of qutrits. We are now aiming at universal, scalable, and fault-tolerant quantum computing based on these hybrid technologies.

  8. [Computer eyeglasses--aspects of a confusing topic].

    PubMed

    Huber-Spitzy, V; Janeba, E

    1997-01-01

    With the coming into force of the new Austrian Employee Protection Act, the issue of so-called "computer glasses" will also gain added importance in our country. Such glasses have been defined as vision aids to be used exclusively for work on computer monitors and include single-vision glasses solely intended for reading the computer screen, glasses with bifocal lenses for reading the computer screen and hard-copy documents, as well as those with varifocal lenses featuring a thickened central section. There is still considerable controversy among those concerned as to who will bear the costs for such glasses--most likely it will be the employer. Prescription of such vision aids will be restricted exclusively to ophthalmologists, based on a thorough ophthalmological examination under adequate consideration of the specific working environment and workplace requirements of the individual employee concerned.

  9. Computer Vision for High-Throughput Quantitative Phenotyping: A Case Study of Grapevine Downy Mildew Sporulation and Leaf Trichomes.

    PubMed

    Divilov, Konstantin; Wiesner-Hanks, Tyr; Barba, Paola; Cadle-Davidson, Lance; Reisch, Bruce I

    2017-12-01

    Quantitative phenotyping of downy mildew sporulation is frequently used in plant breeding and genetic studies, as well as in studies focused on pathogen biology such as chemical efficacy trials. In these scenarios, phenotyping a large number of genotypes or treatments can be advantageous but is often limited by time and cost. We present a novel computational pipeline dedicated to estimating the percent area of downy mildew sporulation from images of inoculated grapevine leaf discs in a manner that is time and cost efficient. The pipeline was tested on images from leaf disc assay experiments involving two F1 grapevine families, one that had glabrous leaves (Vitis rupestris B38 × 'Horizon' [RH]) and another that had leaf trichomes (Horizon × V. cinerea B9 [HC]). Correlations between computer vision and manual visual ratings reached 0.89 in the RH family and 0.43 in the HC family. Additionally, we were able to use the computer vision system prior to sporulation to measure the percent leaf trichome area. We estimate that an experienced rater scoring sporulation would spend at least 90% less time using the computer vision system compared with the manual visual method. This will allow more treatments to be phenotyped in order to better understand the genetic architecture of downy mildew resistance and of leaf trichome density. We anticipate that this computer vision system will find applications in other pathosystems or traits where responses can be imaged with sufficient contrast from the background.
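The core quantity reported above, percent sporulation area, can be illustrated with a toy thresholding example. This is a hedged sketch only: the paper's actual pipeline is more elaborate, and the function name, threshold, and synthetic image below are assumptions for illustration.

```python
import numpy as np

def percent_area(gray_disc, mask, thresh=200):
    """Percent of the leaf-disc mask whose pixels exceed a brightness
    threshold; a toy stand-in for a sporulation-area estimate."""
    bright = (gray_disc >= thresh) & mask
    return 100.0 * np.count_nonzero(bright) / np.count_nonzero(mask)

# Synthetic disc: dark leaf tissue with one bright sporulating patch.
disc = np.full((100, 100), 60, dtype=np.uint8)
mask = np.ones_like(disc, dtype=bool)
disc[:20, :50] = 230  # 1,000 of 10,000 pixels sporulating
assert percent_area(disc, mask) == 10.0
```

A real pipeline would also need segmentation of the disc from the background and, as the HC-family results suggest, some way to keep bright trichomes from being counted as sporulation.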

  10. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion.

    PubMed

    Otero-Millan, Jorge; Roberts, Dale C; Lasker, Adrian; Zee, David S; Kheradmand, Amir

    2015-01-01

    Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines.
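The torsion estimate described above boils down to finding the rotation that best matches an iris template. As a toy stand-in for the paper's OpenCV template matching, the sketch below assumes the iris has already been unwrapped into a 1-D angular intensity profile (one sample per degree) and searches circular shifts for the maximum correlation; all names and parameters are illustrative.

```python
import numpy as np

def torsion_angle(ref_profile, cur_profile):
    """Estimate torsion (in degrees) as the circular shift of the
    current iris profile that maximizes correlation with the template."""
    ref = ref_profile - ref_profile.mean()
    best_shift, best_corr = 0, -np.inf
    for shift in range(len(cur_profile)):
        cur = np.roll(cur_profile, -shift) - cur_profile.mean()
        corr = float(np.dot(ref, cur))
        if corr > best_corr:
            best_corr, best_shift = corr, shift
    # Report shifts beyond 180° as negative rotations.
    return best_shift if best_shift <= 180 else best_shift - 360

# Synthetic iris signature sampled at 1°/bin, then rotated by 7°.
rng = np.random.default_rng(1)
ref = rng.normal(size=360)
rotated = np.roll(ref, 7)  # torsion of +7°
assert torsion_angle(ref, rotated) == 7
```

The real system additionally masks out the eyelid-occluded part of the iris before matching, which this sketch omits.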

  11. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion

    PubMed Central

    Otero-Millan, Jorge; Roberts, Dale C.; Lasker, Adrian; Zee, David S.; Kheradmand, Amir

    2015-01-01

    Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines. PMID:26587699

  12. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is developing a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied to onboard vision systems for aircraft, including small and unmanned aircraft.
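The first algorithm in the set, estimating the geometric transformation between frames from a moving sensor, can be illustrated for the simplest case of a pure global translation using phase correlation. This is a hedged NumPy sketch, not the authors' algorithm (which must handle more general transformations):

```python
import numpy as np

def estimate_translation(prev, cur):
    """Estimate the global (dy, dx) shift between two frames via phase
    correlation; compensating this shift lets plain frame differencing
    reveal independently moving objects."""
    f = np.conj(np.fft.fft2(prev)) * np.fft.fft2(cur)
    r = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    h, w = prev.shape
    # Map peaks in the upper half of the range to negative shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(3)
prev = rng.normal(size=(64, 64))
cur = np.roll(prev, (3, 5), axis=(0, 1))  # sensor shift of (3, 5) pixels
assert estimate_translation(prev, cur) == (3, 5)
```

After undoing the estimated shift (e.g., with `np.roll(cur, (-dy, -dx), axis=(0, 1))`), residual frame differences correspond to genuinely moving objects rather than camera motion.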

  13. Software architecture for time-constrained machine vision applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
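The topic-based publish/subscribe routing described for the messaging layer can be sketched in a few lines. This is a minimal, hedged stand-in (the class and topic names are invented here), not the architecture's actual message-passing interface:

```python
from collections import defaultdict

class MessageBus:
    """Minimal topic-based publish/subscribe broker: publishers send to
    topics, and only callbacks subscribed to that topic receive them."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for cb in self._subs[topic]:
            cb(message)

bus = MessageBus()
frames = []
bus.subscribe("camera/frames", frames.append)
bus.publish("camera/frames", {"id": 1})
bus.publish("results/jam", {"alarm": True})  # no subscriber: dropped
assert frames == [{"id": 1}]
```

The dynamic part of the pattern is that modules can subscribe or unsubscribe at run time without the publishers changing; a production version would add threading and queueing for time-constrained delivery.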

  14. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  15. Computer Vision Photogrammetry for Underwater Archaeological Site Recording in a Low-Visibility Environment

    NASA Astrophysics Data System (ADS)

    Van Damme, T.

    2015-04-01

    Computer Vision Photogrammetry allows archaeologists to accurately record underwater sites in three dimensions using simple two-dimensional picture or video sequences, automatically processed in dedicated software. In this article, I share my experience in working with one such software package, namely PhotoScan, to record a Dutch shipwreck site. In order to demonstrate the method's reliability and flexibility, the site in question is reconstructed from simple GoPro footage, captured in low-visibility conditions. Based on the results of this case study, Computer Vision Photogrammetry compares very favourably to manual recording methods both in recording efficiency and in the quality of the final results. In a final section, the significance of Computer Vision Photogrammetry is then assessed from a historical perspective, by placing the current research in the wider context of about half a century of successful use of Analytical and later Digital photogrammetry in the field of underwater archaeology. I conclude that while photogrammetry has been used in our discipline for several decades now, for various reasons the method was only ever used by a relatively small percentage of projects. This is likely to change in the near future since, compared to the `traditional' photogrammetry approaches employed in the past, today Computer Vision Photogrammetry is easier to use, more reliable and more affordable than ever before, while at the same time producing more accurate and more detailed three-dimensional results.

  16. Dynamic Estimation of Rigid Motion from Perspective Views via Recursive Identification of Exterior Differential Systems with Parameters on a Topological Manifold

    DTIC Science & Technology

    1994-02-15

    O. Faugeras. Three-dimensional vision, a geometric viewpoint. MIT Press, 1993. [19] O. D. Faugeras and S. Maybank. Motion from point matches... multiplicity of solutions. Int. J. of Computer Vision, 1990. [20] O. D. Faugeras, Q. T. Luong, and S. J. Maybank. Camera self-calibration: theory and... Kalman filter-based algorithms for estimating depth from image sequences. Int. J. of Computer Vision, 1989. [41] S. Maybank. Theory of...

  17. Computational Vision: A Critical Review

    DTIC Science & Technology

    1989-10-01

    Optic News, 15:9-25, 1989. [8] H. B. Barlow and W. R. Levick. The mechanism of directional selectivity in the rabbit's retina. J. Physiol., 173:477... comparison, other formulations, e.g., [64], used... Figure 7: An illustration of the aperture problem. Left: a bar E is... Ballard and C. M. Brown. Computer Vision. Prentice-Hall, Englewood Cliffs, NJ, 1982. [7] D. H. Ballard, R. C. Nelson, and B. Yamauchi. Animate vision

  18. Marking parts to aid robot vision

    NASA Technical Reports Server (NTRS)

    Bales, J. W.; Barker, L. K.

    1981-01-01

    The premarking of parts for subsequent identification by a robot vision system appears to be beneficial as an aid in the automation of certain tasks such as construction in space. A simple, color coded marking system is presented which allows a computer vision system to locate an object, calculate its orientation, and determine its identity. Such a system has the potential to operate accurately, and because the computer shape analysis problem has been simplified, it has the ability to operate in real time.

  19. Dynamics of propagation of premature impulses in structurally remodeled infarcted myocardium: a computational analysis

    PubMed Central

    Cabo, Candido

    2014-01-01

    Initiation of cardiac arrhythmias typically follows one or more premature impulses, either occurring spontaneously or applied externally. In this study, we characterize the dynamics of propagation of single (S2) and double premature impulses (S3), and the mechanisms of block of premature impulses at structural heterogeneities caused by remodeling of gap junctional conductance (Gj) in infarcted myocardium. Using a sub-cellular computer model of infarcted tissue, we found that |INa,max|, prematurity (coupling interval with the previous impulse), and conduction velocity (CV) of premature impulses change dynamically as they propagate away from the site of initiation. There are fundamental differences between the dynamics of propagation of S2 and S3 premature impulses: for S2 impulses |INa,max| recovers fast, prematurity decreases and CV increases as propagation proceeds; for S3 impulses low values of |INa,max| persist, prematurity could increase, and CV could decrease as impulses propagate away from the site of initiation. As a consequence, it is more likely that S3 impulses block at sites of structural heterogeneities causing source/sink mismatch than that S2 impulses do. Whether premature impulses block at Gj heterogeneities or not is also determined by the values of Gj (and the space constant λ) in the regions proximal and distal to the heterogeneity: when λ in the direction of propagation increases >40%, premature impulses could block. The maximum slope of CV restitution curves for S2 impulses is larger than for S3 impulses. 
In conclusion: (1) The dynamics of propagation of premature impulses make it more likely that S3 impulses block at sites of structural heterogeneities than that S2 impulses do; (2) Structural heterogeneities causing an increase in λ (or CV) of >40% could result in block of premature impulses; (3) A decrease in the maximum slope of CV restitution curves of propagating premature impulses is indicative of an increased potential for block at structural heterogeneities. PMID:25566085

  20. Dynamics of propagation of premature impulses in structurally remodeled infarcted myocardium: a computational analysis.

    PubMed

    Cabo, Candido

    2014-01-01

    Initiation of cardiac arrhythmias typically follows one or more premature impulses, either occurring spontaneously or applied externally. In this study, we characterize the dynamics of propagation of single (S2) and double premature impulses (S3), and the mechanisms of block of premature impulses at structural heterogeneities caused by remodeling of gap junctional conductance (Gj) in infarcted myocardium. Using a sub-cellular computer model of infarcted tissue, we found that |INa,max|, prematurity (coupling interval with the previous impulse), and conduction velocity (CV) of premature impulses change dynamically as they propagate away from the site of initiation. There are fundamental differences between the dynamics of propagation of S2 and S3 premature impulses: for S2 impulses |INa,max| recovers fast, prematurity decreases and CV increases as propagation proceeds; for S3 impulses low values of |INa,max| persist, prematurity could increase, and CV could decrease as impulses propagate away from the site of initiation. As a consequence, it is more likely that S3 impulses block at sites of structural heterogeneities causing source/sink mismatch than that S2 impulses do. Whether premature impulses block at Gj heterogeneities or not is also determined by the values of Gj (and the space constant λ) in the regions proximal and distal to the heterogeneity: when λ in the direction of propagation increases >40%, premature impulses could block. The maximum slope of CV restitution curves for S2 impulses is larger than for S3 impulses. (1) The dynamics of propagation of premature impulses make it more likely that S3 impulses block at sites of structural heterogeneities than that S2 impulses do; (2) Structural heterogeneities causing an increase in λ (or CV) of >40% could result in block of premature impulses; (3) A decrease in the maximum slope of CV restitution curves of propagating premature impulses is indicative of an increased potential for block at structural heterogeneities.

  1. Complex-valued time-series correlation increases sensitivity in FMRI analysis.

    PubMed

    Kociuba, Mary C; Rowe, Daniel B

    2016-07-01

    To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. 
Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI data sets with high noise variance, and avoid excessive processing induced correlation. Copyright © 2016 Elsevier Inc. All rights reserved.
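    As a rough numerical illustration of why retaining the phase helps, here is a toy numpy sketch (not the paper's Fourier-domain matrix formulation; the signal model, gains and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
task = np.tile([0.0] * 10 + [1.0] * 10, 10)  # boxcar task regressor

def voxel():
    # Weak task-related magnitude change, stronger phase change, complex noise.
    mag = 1.0 + 0.01 * task
    phase = 0.3 + 0.10 * task
    noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return mag * np.exp(1j * phase) + noise

y1, y2 = voxel(), voxel()

def corr(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def cv_corr(a, b):
    # Complex-valued correlation via the real isomorphism: demean each
    # complex series, then correlate real and imaginary parts jointly.
    a = a - a.mean(); b = b - b.mean()
    num = np.real(np.vdot(a, b))
    den = np.sqrt(np.vdot(a, a).real * np.vdot(b, b).real)
    return float(num / den)

r_mo = corr(np.abs(y1), np.abs(y2))  # magnitude-only: phase is discarded
r_cv = cv_corr(y1, y2)               # complex-valued: phase is retained

fisher_z = np.arctanh
print(round(fisher_z(r_mo), 3), round(fisher_z(r_cv), 3))
```

    With the task signal mostly in the phase, the complex-valued correlation is far stronger than the magnitude-only one, mirroring the low-CNR result of the simulation.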

  2. Real-time detecting and tracking ball with OpenCV and Kinect

    NASA Astrophysics Data System (ADS)

    Osiecki, Tomasz; Jankowski, Stanislaw

    2016-09-01

    This paper presents a way to detect and track a ball using OpenCV and the Kinect. Object and people recognition and tracking are increasingly popular topics nowadays. The described solution makes it possible to detect the ball based on a range set by the user and to capture information about the ball's position in three dimensions. This information can be stored on the computer and used, for example, to display the trajectory of the ball.

  3. Topographic Mapping of Residual Vision by Computer

    ERIC Educational Resources Information Center

    MacKeben, Manfred

    2008-01-01

    Many persons with low vision have diseases that damage the retina only in selected areas, which can lead to scotomas (blind spots) in perception. The most frequent of these diseases is age-related macular degeneration (AMD), in which foveal vision is often impaired by a central scotoma that impairs vision of fine detail and causes problems with…

  4. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  5. The Effect of the Usage of Computer-Based Assistive Devices on the Functioning and Quality of Life of Individuals Who Are Blind or Have Low Vision

    ERIC Educational Resources Information Center

    Rosner, Yotam; Perlman, Amotz

    2018-01-01

    Introduction: The Israel Ministry of Social Affairs and Social Services subsidizes computer-based assistive devices for individuals with visual impairments (that is, those who are blind or have low vision) to assist these individuals in their interactions with computers and thus to enhance their independence and quality of life. The aim of this…

  6. [Ophthalmologist and "computer vision syndrome"].

    PubMed

    Barar, A; Apatachioaie, Ioana Daniela; Apatachioaie, C; Marceanu-Brasov, L

    2007-01-01

    The authors have tried to collect the data available on the Internet about a subject that we consider totally ignored in the Romanian scientific literature and unexpectedly insufficiently treated in the specialized ophthalmologic literature. Known in the specialty literature under the generic name of "computer vision syndrome", it is defined by the American Optometric Association as a complex of eye and vision problems related to activities that stress near vision and that are experienced in relation to, or during, the use of the computer. During consultations we hear frequent complaints of eye strain - asthenopia, headaches, blurred distance and/or near vision, dry and irritated eyes, slow refocusing, neck and backache, photophobia, sensation of diplopia, and light sensitivity - but because of the lack of information, we overlook them too easily, without going thoroughly into the real motives. In most developed countries, there are recommendations issued by renowned medical associations with regard to the definition, the diagnosis, and the methods for the prevention, treatment and periodical control of the symptoms found in computer users, in conjunction with extremely detailed ergonomic legislation. We found that these problems attract far too little interest in our country. We would like to rouse the interest of our ophthalmologist colleagues in understanding and recognizing these symptoms and in treating, or at least improving, them through specialized measures or through cooperation with our colleagues specialized in occupational medicine.

  7. Non-perturbative determination of cV, ZV and ZS/ZP in Nf = 3 lattice QCD

    NASA Astrophysics Data System (ADS)

    Heitger, Jochen; Joswig, Fabian; Vladikas, Anastassios; Wittemeier, Christian

    2018-03-01

    We report on non-perturbative computations of the improvement coefficient cV and the renormalization factor ZV of the vector current in three-flavour O(a) improved lattice QCD with Wilson quarks and tree-level Symanzik improved gauge action. To reduce finite quark mass effects, our improvement and normalization conditions exploit massive chiral Ward identities formulated in the Schrödinger functional setup, which also allow deriving a new method to extract the ratio ZS/ZP of scalar to pseudoscalar renormalization constants. We present preliminary results of a numerical evaluation of ZV and cV along a line of constant physics with gauge couplings corresponding to lattice spacings of about 0.09 fm and below, relevant for phenomenological applications.

  8. Intracranial stimulation of the trigeminal nerve in man. III. Sensory potentials.

    PubMed Central

    Cruccu, G; Inghilleri, M; Manfredi, M; Meglio, M

    1987-01-01

    Percutaneous electrical stimulation of the trigeminal root was performed in 18 subjects undergoing surgery for idiopathic trigeminal neuralgia or implantation of electrodes into Meckel's cave for recording of limbic epileptic activity. All subjects had normal trigeminal reflexes and evoked potentials. Sensory action potentials were recorded antidromically from the supraorbital (V1), infraorbital (V2) and mental (V3) nerves. In the awake subject, sensory potentials were usually followed by myogenic artifacts due to direct activation of masticatory muscles or reflex activation of facial muscles. In the anaesthetised and curarised subject, sensory potentials from the three nerves showed 1.4-2.2 ms onset latency, 1.9-2.7 ms peak latency and 17-29 microV amplitude. Sensory conduction velocity was computed at the onset latency (maximum CV) and at the peak latency (peak CV). On average, maximum and peak CV were 52 and 39 m/s for V1, 54 and 42 m/s for V2 and 54 and 44 m/s for V3. There was no apparent difference in CV between subjects with trigeminal neuralgia and those with epilepsy. A significant inverse correlation was found between CV and age, the overall maximum CV declining from 59 m/s (16 years) to 49 m/s (73 years). This range of CV is compatible both with histometric data and previous electrophysiological findings on trigeminal nerve conduction. Intraoperative intracranial stimulation is also proposed as a method of monitoring trigeminal function under general anaesthesia. PMID:3681311
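    The velocity figures follow directly from latency and conduction distance; in the sketch below, the 0.10 m distance is a hypothetical value chosen only so the arithmetic reproduces velocities in the reported range:

```python
# Sensory conduction velocity as computed in the study: CV = distance / latency.
# The root-to-recording-site distance is a hypothetical illustration value.
distance_m = 0.10
onset_latency_s = 1.9e-3   # within the reported 1.4-2.2 ms onset range
peak_latency_s = 2.4e-3    # within the reported 1.9-2.7 ms peak range

maximum_cv = distance_m / onset_latency_s   # fastest-conducting fibres
peak_cv = distance_m / peak_latency_s       # bulk of the fibre population
print(round(maximum_cv, 1), round(peak_cv, 1))
```

    With these assumed numbers the maximum CV comes out near the ~52 m/s reported for V1, and the peak CV near the ~42 m/s range.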

  9. A comparison of symptoms after viewing text on a computer screen and hardcopy.

    PubMed

    Chu, Christina; Rosenfield, Mark; Portello, Joan K; Benzoni, Jaclyn A; Collier, Juanita D

    2011-01-01

    Computer vision syndrome (CVS) is a complex of eye and vision problems experienced during or related to computer use. Ocular symptoms may include asthenopia, accommodative and vergence difficulties and dry eye. CVS occurs in up to 90% of computer workers, and given the almost universal use of these devices, it is important to identify whether these symptoms are specific to computer operation, or are simply a manifestation of performing a sustained near-vision task. This study compared ocular symptoms immediately following a sustained near task. 30 young, visually-normal subjects read text aloud either from a desktop computer screen or a printed hardcopy page at a viewing distance of 50 cm for a continuous 20 min period. Identical text was used in the two sessions, which was matched for size and contrast. Target viewing angle and luminance were similar for the two conditions. Immediately following completion of the reading task, subjects completed a written questionnaire asking about their level of ocular discomfort during the task. When comparing the computer and hardcopy conditions, significant differences in median symptom scores were reported with regard to blurred vision during the task (t = 147.0; p = 0.03) and the mean symptom score (t = 102.5; p = 0.04). In both cases, symptoms were higher during computer use. Symptoms following sustained computer use were significantly worse than those reported after hard copy fixation under similar viewing conditions. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will allow practitioners to optimize visual comfort and efficiency during computer operation.

  10. Lumber Grading With A Computer Vision System

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  11. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    DTIC Science & Technology

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  12. Quality Parameters of Six Cultivars of Blueberry Using Computer Vision

    PubMed Central

    Celis Cofré, Daniela; Silva, Patricia; Enrione, Javier; Osorio, Fernando

    2013-01-01

    Background. Blueberries are considered an important source of health benefits. This work studied six blueberry cultivars: “Duke,” “Brigitta”, “Elliott”, “Centurion”, “Star,” and “Jewel”, measuring quality parameters such as °Brix, pH, moisture content using standard techniques and shape, color, and fungal presence obtained by computer vision. The storage conditions were time (0–21 days), temperature (4 and 15°C), and relative humidity (75 and 90%). Results. Significant differences (P < 0.05) were detected between fresh cultivars in pH, °Brix, shape, and color. However, the main parameters which changed depending on storage conditions, increasing at higher temperature, were color (from blue to red) and fungal presence (from 0 to 15%), both detected using computer vision, which is important to determine a shelf life of 14 days for all cultivars. Similar behavior during storage was obtained for all cultivars. Conclusion. Computer vision proved to be a reliable and simple method to objectively determine blueberry decay during storage that can be used as an alternative approach to currently used subjective measurements. PMID:26904598
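    The blue-to-red colour shift can be quantified with a very simple per-pixel test. This numpy sketch (the pixel rule, threshold, and synthetic "berry" images are assumptions, not the paper's computer-vision method) reports the fraction of red-shifted pixels:

```python
import numpy as np

def red_fraction(rgb):
    # Fraction of pixels whose red channel dominates the blue channel,
    # used here as a crude proxy for the blue-to-red decay shift.
    rgb = np.asarray(rgb, dtype=float)
    return float(np.mean(rgb[..., 0] > rgb[..., 2]))

fresh = np.zeros((10, 10, 3))
fresh[..., 2] = 200                      # uniformly blue "berry"

stored = fresh.copy()
stored[:3, :, 0] = 255                   # a red-shifted patch after storage
stored[:3, :, 2] = 0

print(red_fraction(fresh), red_fraction(stored))
```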

  13. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    NASA Astrophysics Data System (ADS)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

    CVIPtools is a software package for the exploration of computer vision and image processing developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) CVIPtools Graphical User Interface, b) CVIPtools C library and c) CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any other user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, an algorithm for the automatic creation of masks for veterinary thermographic images is presented.

  14. Applications of Computer Vision for Assessing Quality of Agri-food Products: A Review of Recent Research Advances.

    PubMed

    Ma, Ji; Sun, Da-Wen; Qu, Jia-Huan; Liu, Dan; Pu, Hongbin; Gao, Wen-Hong; Zeng, Xin-An

    2016-01-01

    With consumer concerns increasing over food quality and safety, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems over the years. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments for food quality and safety during food production and processing. Computer vision, comprising a nondestructive assessment approach, has the aptitude to estimate the characteristics of food products with its advantages of fast speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review aims to present the fundamentals and instrumentation of computer vision systems with details of applications in quality assessment of agri-food products from 2007 to 2013 and also discuss its future trends in combination with spectroscopy.

  15. Development of embedded real-time and high-speed vision platform

    NASA Astrophysics Data System (ADS)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, traditional high-speed vision platforms rely on a personal computer (PC) for human-computer interaction, whose large size makes it unsuitable for compact systems. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which is able to work completely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed that implements image-parallel algorithms in the FPGA and image-sequential algorithms in the DSP. Hence, ER-HVP Vision delivers this capability in a compact package measuring 320mm x 250mm x 87mm. Experimental results are also given to indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels is feasible on this newly developed vision platform.

  16. Computer vision syndrome: a study of the knowledge, attitudes and practices in Indian ophthalmologists.

    PubMed

    Bali, Jatinder; Navin, Neeraj; Thakur, Bali Renu

    2007-01-01

    To study the knowledge, attitude and practices (KAP) towards computer vision syndrome prevalent in Indian ophthalmologists and to assess whether 'computer use by practitioners' had any bearing on the knowledge and practices in computer vision syndrome (CVS). A random KAP survey was carried out on 300 Indian ophthalmologists using a 34-point spot-questionnaire in January 2005. All the doctors who responded were aware of CVS. The chief presenting symptoms were eyestrain (97.8%), headache (82.1%), tiredness and burning sensation (79.1%), watering (66.4%) and redness (61.2%). Ophthalmologists using computers reported that focusing from distance to near and vice versa (P =0.006, chi2 test), blurred vision at a distance (P =0.016, chi2 test) and blepharospasm (P =0.026, chi2 test) formed part of the syndrome. The main mode of treatment used was tear substitutes. Half of ophthalmologists (50.7%) were not prescribing any spectacles. They did not have any preference for any special type of glasses (68.7%) or spectral filters. Computer-users were more likely to prescribe sedatives/anxiolytics (P = 0.04, chi2 test), spectacles (P = 0.02, chi2 test) and conscious frequent blinking (P = 0.003, chi2 test) than the non-computer-users. All respondents were aware of CVS. Confusion regarding treatment guidelines was observed in both groups. Computer-using ophthalmologists were more informed of symptoms and diagnostic signs but were misinformed about treatment modalities.

  17. Fusion of Multiple Sensing Modalities for Machine Vision

    DTIC Science & Technology

    1994-05-31

    Modeling of Non-Homogeneous 3-D Objects for Thermal and Visual Image Synthesis," Pattern Recognition, in press. U [11] Nair, Dinesh , and J. K. Aggarwal...20th AIPR Workshop: Computer Vision--Meeting the Challenges, McLean, Virginia, October 1991. Nair, Dinesh , and J. K. Aggarwal, "An Object Recognition...Computer Engineering August 1992 Sunil Gupta Ph.D. Student Mohan Kumar M.S. Student Sandeep Kumar M.S. Student Xavier Lebegue Ph.D., Computer

  18. The Implications of Pervasive Computing on Network Design

    NASA Astrophysics Data System (ADS)

    Briscoe, R.

    Mark Weiser's late-1980s vision of an age of calm technology with pervasive computing disappearing into the fabric of the world [1] has been tempered by an industry-driven vision with more of a feel of conspicuous consumption. In the modified version, everyone carries around consumer electronics to provide natural, seamless interactions both with other people and with the information world, particularly for eCommerce, but still through a pervasive computing fabric.

  19. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, which is work currently in progress, is described along with preliminary results and the problems encountered.

  20. Riemann tensor of motion vision revisited.

    PubMed

    Brill, M

    2001-07-02

    This note shows that the Riemann-space interpretation of motion vision developed by Barth and Watson is neither necessary for their results, nor sufficient to handle an intrinsic coordinate problem. Recasting the Barth-Watson framework as a classical velocity-solver (as in computer vision) solves these problems.
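    A classical velocity solver of the sort alluded to can be written as a least-squares problem on the brightness-constancy equation; the synthetic frames and the ground-truth shift below are illustrative assumptions:

```python
import numpy as np

# Lucas-Kanade-style global velocity solve: recover the image velocity
# (u, v) from spatial gradients (Ix, Iy) and the temporal difference It,
# using the brightness-constancy constraint Ix*u + Iy*v = -It.
x = np.linspace(0, 20, 200)
y = np.linspace(0, 20, 200)
X, Y = np.meshgrid(x, y)

u_true, v_true = 0.4, 0.2                      # assumed per-frame motion
frame0 = np.sin(X) + np.cos(0.7 * Y)
frame1 = np.sin(X - u_true) + np.cos(0.7 * (Y - v_true))

Iy, Ix = np.gradient(frame0, y, x)             # spatial derivatives
It = frame1 - frame0                           # temporal derivative

A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
u, v = np.linalg.lstsq(A, -It.ravel(), rcond=None)[0]
print(round(u, 2), round(v, 2))
```

    For small shifts the least-squares solution recovers (u, v) closely, which is the "classical velocity-solver" machinery the note invokes.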

  1. Evaluation of the Waggoner Computerized Color Vision Test.

    PubMed

    Ng, Jason S; Self, Eriko; Vanston, John E; Nguyen, Andrew L; Crognale, Michael A

    2015-04-01

    Clinical color vision evaluation has been based primarily on the same set of tests for the past several decades. Recently, computer-based color vision tests have been devised, and these have several advantages but are still not widely used. In this study, we evaluated the Waggoner Computerized Color Vision Test (CCVT), which was developed for widespread use with common computer systems. A sample of subjects with (n = 59) and without (n = 361) color vision deficiency (CVD) were tested on the CCVT, the anomaloscope, the Richmond HRR (Hardy-Rand-Rittler) (4th edition), and the Ishihara test. The CCVT was administered in two ways: (1) on a computer monitor using its default settings and (2) on one standardized to a correlated color temperature (CCT) of 6500 K. Twenty-four subjects with CVD performed the CCVT both ways. Sensitivity, specificity, and correct classification rates were determined. The screening performance of the CCVT was good (95% sensitivity, 100% specificity). The CCVT classified subjects as deutan or protan in agreement with anomaloscopy 89% of the time. It generally classified subjects as having a more severe defect compared with other tests. Results from 18 of the 24 subjects with CVD tested under both default and calibrated CCT conditions were the same, whereas the results from 6 subjects had better agreement with other test results when the CCT was set. The Waggoner CCVT is an adequate color vision screening test with several advantages and appears to provide a fairly accurate diagnosis of deficiency type. Used in conjunction with other color vision tests, it may be a useful addition to a color vision test battery.
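    The screening figures reduce to simple confusion-matrix arithmetic. The counts below are illustrative values consistent with the reported 95% sensitivity and 100% specificity, not the study's raw tabulation:

```python
# Screening metrics for a diagnostic test, from a confusion matrix.
def screening_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)          # affected subjects correctly flagged
    specificity = tn / (tn + fp)          # unaffected subjects correctly passed
    correct = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, correct

# Hypothetical split: 56 of the 59 CVD subjects fail the screen,
# and all 361 colour-normal subjects pass it.
sens, spec, correct = screening_metrics(tp=56, fn=3, tn=361, fp=0)
print(round(sens, 2), round(spec, 2), round(correct, 3))
```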

  2. Composition-based classification of short metagenomic sequences elucidates the landscapes of taxonomic and functional enrichment of microorganisms

    PubMed Central

    Liu, Jiemeng; Wang, Haifeng; Yang, Hongxing; Zhang, Yizhe; Wang, Jinfeng; Zhao, Fangqing; Qi, Ji

    2013-01-01

    Compared with traditional algorithms for long metagenomic sequence classification, characterizing microorganisms’ taxonomic and functional abundance based on tens of millions of very short reads is much more challenging. We describe an efficient composition- and phylogeny-based algorithm [Metagenome Composition Vector (MetaCV)] to classify very short metagenomic reads (75–100 bp) into specific taxonomic and functional groups. We applied MetaCV to the Meta-HIT data (371 Gb of 75-bp reads from 109 human gut metagenomes), and this single-read-based, instead of assembly-based, classification has a high resolution to characterize the composition and structure of human gut microbiota, especially for low-abundance species. Most strikingly, it took MetaCV only 10 days to do all the computation work on a server with five 24-core nodes. To our knowledge, MetaCV, benefiting from the strategy of composition comparison, is the first algorithm that can classify millions of very short reads within affordable time. PMID:22941634
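    The composition-vector idea can be sketched in a few lines. This toy classifier uses synthetic references and plain cosine similarity instead of MetaCV's phylogeny-aware scoring, and assigns a short read to the reference with the most similar k-mer spectrum:

```python
import numpy as np
from itertools import product

K = 3
KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

def composition_vector(seq):
    # Normalized k-mer frequency vector of a DNA sequence.
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        v[KMERS[seq[i:i + K]]] += 1
    return v / max(v.sum(), 1)

rng = np.random.default_rng(1)
# Two synthetic references with distinct base composition.
ref_a = "".join(rng.choice(list("GGCCAT"), 5000))   # GC-rich
ref_b = "".join(rng.choice(list("AATTGC"), 5000))   # AT-rich

read = ref_a[1000:1085]   # an 85 bp read drawn from reference A

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

scores = {name: cosine(composition_vector(read), composition_vector(ref))
          for name, ref in [("A", ref_a), ("B", ref_b)]}
print(scores)
```

    Even at 85 bp the read's 3-mer spectrum matches its source genome far better than the other, which is the signal MetaCV exploits at scale.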

  3. Microwave vision for robots

    NASA Technical Reports Server (NTRS)

    Lewandowski, Leon; Struckman, Keith

    1994-01-01

    Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.

  4. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner, so that the implementation of a sea-wave 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and on the produced point cloud, to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.

  5. A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.

    Here, we present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid–structure interface are avoided and far-field (smooth) velocity and pressure information is used. We re-visit the approach to compute hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.

  7. Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.

    Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.

  8. A machine vision system for micro-EDM based on linux

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Wansheng; Li, Gang; Li, Zhiyong; Zhang, Yong

    2006-11-01

    Due to the high precision and good surface quality that it can give, Electrical Discharge Machining (EDM) is potentially an important process for the fabrication of micro-tools and micro-components. However, a number of issues remain unsolved before micro-EDM becomes a reliable process with repeatable results. To deal with the difficulties in on-line fabrication of micro electrodes and tool wear compensation, a micro-EDM machine vision system is developed with a Charge Coupled Device (CCD) camera, with an optical resolution of 1.61 μm and an overall magnification of 113 to 729. Based on the Linux operating system, an image capturing program is developed with the V4L2 API, and an image processing program is developed using OpenCV. The contour of micro electrodes can be extracted by means of the Canny edge detector. Through system calibration, the micro electrode diameter can be measured on-line. Experiments have been carried out to prove its performance, and the sources of measurement error are also analyzed.
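    The on-line diameter measurement described above reduces to converting a pixel extent into microns via the calibrated optical resolution. A minimal NumPy sketch of that conversion step, with a synthetic edge map and hypothetical electrode width (in practice a Canny detector, e.g. OpenCV's, would supply the contour):

    ```python
    import numpy as np

    # The paper reports an optical resolution of 1.61 um per pixel; here we
    # measure an electrode's diameter from a binary edge map by taking the
    # horizontal extent of edge pixels on each row and averaging.
    UM_PER_PIXEL = 1.61

    def diameter_um(edge_map):
        """Mean horizontal extent (in um) over rows that contain edge pixels."""
        widths = []
        for row in edge_map:
            cols = np.flatnonzero(row)
            if cols.size >= 2:
                widths.append(cols[-1] - cols[0])
        return float(np.mean(widths)) * UM_PER_PIXEL

    # Synthetic edge map: left and right contour of a 100-pixel-wide electrode
    edges = np.zeros((50, 200), dtype=np.uint8)
    edges[:, 50] = 1   # left edge column
    edges[:, 150] = 1  # right edge column
    print(round(diameter_um(edges), 2))  # 100 px * 1.61 um/px = 161.0
    ```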

  9. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active vision system based reverse engineering approach to extract the three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems to improve the accuracy of 3D teeth models and at the same time improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing and fast & accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and weighted objectives evaluation chart. Reconstruction results and accuracy evaluation are presented on digitizing different teeth models.

  10. Research on an autonomous vision-guided helicopter

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Mesaki, Yuji; Kanade, Takeo

    1994-01-01

    Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.

  11. Real-time unconstrained object recognition: a processing pipeline based on the mammalian visual system.

    PubMed

    Aguilar, Mario; Peot, Mark A; Zhou, Jiangying; Simons, Stephen; Liao, Yuwei; Metwalli, Nader; Anderson, Mark B

    2012-03-01

    The mammalian visual system is still the gold standard for recognition accuracy, flexibility, efficiency, and speed. Ongoing advances in our understanding of function and mechanisms in the visual system can now be leveraged to pursue the design of computer vision architectures that will revolutionize the state of the art in computer vision.

  12. Automated Grading of Rough Hardwood Lumber

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Any automatic hardwood grading system must have two components. The first of these is a computer vision system for locating and identifying defects on rough lumber. The second is a system for automatically grading boards based on the output of the computer vision system. This paper presents research results aimed at developing the first of these components. The...

  13. Computer Vision Systems for Hardwood Logs and Lumber

    Treesearch

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  14. Quantification of color vision using a tablet display.

    PubMed

    Chacon, Alicia; Rabin, Jeff; Yu, Dennis; Johnston, Shawn; Bradshaw, Timothy

    2015-01-01

    Accurate color vision is essential for optimal performance in aviation and space environments using nonredundant color coding to convey critical information. Most color tests detect color vision deficiency (CVD) but fail to diagnose type or severity of CVD, which are important to link performance to occupational demands. The computer-based Cone Contrast Test (CCT) diagnoses type and severity of CVD. It is displayed on a netbook computer for clinical application, but a more portable version may prove useful for deployments, space and aviation cockpits, as well as accident and sports medicine settings. Our purpose was to determine if the CCT can be conducted on a tablet display (Windows 8, Microsoft, Seattle, WA) using touch-screen response input. The CCT presents colored letters visible only to red (R), green (G), and blue (B) sensitive retinal cones to determine the lowest R, G, and B cone contrast visible to the observer. The CCT was measured in 16 color vision normals (CVN) and 16 CVDs using the standard netbook computer and a Windows 8 tablet display calibrated to produce equal color contrasts. Both displays showed 100% specificity for confirming CVN and 100% sensitivity for detecting CVD. In CVNs there was no difference between scores on netbook vs. tablet displays. G cone CVDs showed slightly lower G cone CCT scores on the tablet. CVD can be diagnosed with a tablet display. Ease-of-use, portability, and complete computer capabilities make tablets ideal for multiple settings, including aviation, space, military deployments, accidents and rescue missions, and sports vision. Chacon A, Rabin J, Yu D, Johnston S, Bradshaw T. Quantification of color vision using a tablet display.

  15. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision.

    PubMed

    Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf

    2018-06-18

    In forensic odontology the comparison between antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate identification of unknown people by comparison between antemortem and postmortem PRs using computer vision. The study includes 43 467 PRs from 24 545 patients (46 % females/54 % males). All PRs were filtered and evaluated with Matlab R2014b, including the Image Processing and Computer Vision System toolboxes. The matching process used SURF features to find the corresponding points between two PRs (unknown person and database entry) out of the whole database. From 40 randomly selected persons, 34 persons (85 %) could be reliably identified by corresponding PR matching points between an already existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 points for a successful identification between two different PRs of the same person and a maximum of 12 corresponding matching points for other non-identical persons in the database. Hence 12 matching points are the threshold for reliable assignment. Operating with an automatic PR system and computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by virtue of its fast and reliable identification of persons by PR. This identification method is suitable even if dental characteristics were removed or added in the past. The system seems to be robust for large amounts of data. · Computer vision allows an automated antemortem and postmortem comparison of panoramic radiographs (PRs) for person identification. · The present method is able to find identical matching partners among huge datasets (big data) in a short computing time. · The identification method is suitable even if dental characteristics were removed or added. · Heinrich A, Güttler F, Wendt S et al. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-4744. © Georg Thieme Verlag KG Stuttgart · New York.
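    The decision rule reported above (identify a person only when the best database entry exceeds the 12-matching-point threshold seen between non-identical persons) can be sketched as plain Python; the patient names and match counts below are hypothetical, and a feature matcher such as SURF would supply the counts in practice:

    ```python
    # Identification rule from the study: at most 12 corresponding points were
    # observed between PRs of *different* persons, so any score above 12 is
    # treated as a reliable match. Names and counts here are illustrative only.
    THRESHOLD = 12

    def identify(match_counts):
        """Return the best-matching database entry, or None if below threshold."""
        if not match_counts:
            return None
        best_person = max(match_counts, key=match_counts.get)
        return best_person if match_counts[best_person] > THRESHOLD else None

    # Hypothetical match counts of a query radiograph against the database
    counts = {"patient_A": 259, "patient_B": 9, "patient_C": 11}
    print(identify(counts))                              # patient_A
    print(identify({"patient_B": 9, "patient_C": 11}))   # None (below threshold)
    ```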

  16. Lack of international uniformity in assessing color vision deficiency in professional pilots.

    PubMed

    Watson, Dougal B

    2014-02-01

    Color is an important characteristic of the aviation environment. Pilots must rapidly and accurately differentiate and identify colors. The medical standards published by the International Civil Aviation Organization (ICAO) require that pilots have "the ability to perceive readily those colors the perception of which is necessary for the safe performance of duties." The general wording of that color vision (CV) standard, coupled with the associated flexibility provisions, allows for different approaches to the assessment of color vision deficient (CVD) pilots. Data was gathered and analyzed regarding medical assessment practices applied by different countries to CVD pilots. Data was obtained from 78 countries, representing 78% of the population and 92% of the aviation activity of the world. That data indicates wide variation in the medical assessment of CVD pilots. Countries use different tools and procedures for the testing of pilots, and also apply different result criteria to those tests. At one extreme an applicant making one error upon Ishihara 24-plate pseudoisochromatic plate (PIP) testing is declined a class 1 medical assessment, while at another extreme an applicant failing every color vision test required by the regulatory authority may be issued a medical assessment allowing commercial and airline copilot privileges. The medical assessment of CVD applicants is not performed consistently across the world. Factors that favor uniformity have been inadequate to encourage countries toward consistent medical assessment outcomes. This data is not consistent with the highest practicable degree of uniformity in medical assessment outcomes, and encourages aeromedical tourism.

  17. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
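    The information criteria named above have standard least-squares forms; a minimal sketch, with hypothetical sum-of-squared-errors values for two alternative models (the formulas are the usual regression versions, not taken from this paper):

    ```python
    import math

    def aic(n, k, sse):
        """Akaike information criterion for a least-squares fit with k parameters."""
        return n * math.log(sse / n) + 2 * k

    def aicc(n, k, sse):
        """Corrected AIC: AIC plus a small-sample correction term."""
        return aic(n, k, sse) + 2 * k * (k + 1) / (n - k - 1)

    def bic(n, k, sse):
        """Bayesian information criterion: stronger penalty for extra parameters."""
        return n * math.log(sse / n) + k * math.log(n)

    # Hypothetical comparison on n = 50 observations: model B fits slightly
    # better (lower SSE) but uses more parameters than model A.
    n = 50
    a_aicc, a_bic = aicc(n, 3, 120.0), bic(n, 3, 120.0)
    b_aicc, b_bic = aicc(n, 6, 100.0), bic(n, 6, 100.0)
    # With these numbers AICc prefers model B while BIC prefers model A,
    # illustrating why several criteria are compared in practice.
    print(round(a_aicc, 2), round(a_bic, 2))
    print(round(b_aicc, 2), round(b_bic, 2))
    ```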

  18. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models, and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptron, self-stabilized adaptive resonance network, hierarchically structured neocognitron, high-order correlator, network with gating control, and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking, and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features of images, such as edges and profiles, as the data form for input. Other systems use raw data as input signals to the networks. We will present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating those low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.

  19. Therapist-Assisted Rehabilitation of Visual Function and Hemianopia after Brain Injury: Intervention Study on the Effect of the Neuro Vision Technology Rehabilitation Program.

    PubMed

    Rasmussen, Rune Skovgaard; Schaarup, Anne Marie Heltoft; Overgaard, Karsten

    2018-02-27

    Serious and often lasting vision impairments affect 30% to 35% of people following stroke. Vision may be considered the most important sense in humans, and even smaller permanent injuries can drastically reduce quality of life. Restoration of visual field impairments occurs only to a small extent during the first month after brain damage, and therefore the time window for spontaneous improvement is limited. One month after a brain injury causing visual impairment, patients usually will experience chronically impaired vision, and the need for compensatory vision rehabilitation is substantial. The purpose of this study is to investigate whether rehabilitation with Neuro Vision Technology will result in a significant and lasting improvement in functional capacity in persons with chronic visual impairments after brain injury. Improving eyesight is expected to increase both physical and mental functioning, thus improving quality of life. This is a prospective open-label trial in which participants with chronic visual field impairments are examined before and after the intervention. Participants typically suffer from stroke or traumatic brain injury and will be recruited from hospitals and The Institute for the Blind and Partially Sighted. Treatment is based on Neuro Vision Technology, a supervised training course in which participants are trained in compensatory techniques using specially designed equipment. Through the Neuro Vision Technology procedure, the vision problems of each individual are carefully investigated, and personal data is used to organize individual training sessions. Cognitive face-to-face assessments and self-assessed questionnaires about both life and vision quality are also applied before and after the training. Funding was provided in June 2017. Results are expected to be available in 2020. The sample size is calculated at 23 participants. Due to age, transport difficulties, and the time-consuming intervention, up to 25% dropout is expected; thus, we aim to include at least 29 participants. This investigation will evaluate the effects of Neuro Vision Technology therapy on compensatory vision rehabilitation. Additionally, quality of life and cognitive improvements associated with increased quality of life will be explored. ClinicalTrials.gov NCT03160131; https://clinicaltrials.gov/ct2/show/NCT03160131 (Archived by WebCite at http://www.webcitation.org/6x3f5HnCv). ©Rune Skovgaard Rasmussen, Anne Marie Heltoft Schaarup, Karsten Overgaard. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 27.02.2018.

  20. Computer vision-based classification of hand grip variations in neurorehabilitation.

    PubMed

    Zariffa, José; Steeves, John D

    2011-01-01

    The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects. © 2011 IEEE

  1. Computing Optic Flow with ArduEye Vision Sensor

    DTIC Science & Technology

    2013-01-01

    processing algorithm that can be applied to the flight control of other robotic platforms. 15. SUBJECT TERMS Optical flow, ArduEye, vision based ...2 Figure 2. ArduEye vision chip on Stonyman breakout board connected to Arduino Mega (8) (left) and the Stonyman vision chips (7...robotic platforms. There is a significant need for small, light, less power-hungry sensors and sensory data processing algorithms in order to control the

  2. Insect vision as model for machine vision

    NASA Astrophysics Data System (ADS)

    Osorio, D.; Sobey, Peter J.

    1992-11-01

    The neural architecture, neurophysiology, and behavioral abilities of insect vision are described and compared with those of mammals. Insects have a hardwired neural architecture of highly differentiated neurons, quite different from the cerebral cortex, yet their behavioral abilities are in important respects similar to those of mammals. These observations challenge the view that the key to the power of biological neural computation is distributed processing by a plastic, highly interconnected network of individually undifferentiated and unreliable neurons, which has been the dominant picture of biological computation since McCulloch and Pitts' seminal work in the 1940s.

  3. Optimized feature-detection for on-board vision-based surveillance

    NASA Astrophysics Data System (ADS)

    Gond, Laetitia; Monnin, David; Schneider, Armin

    2012-06-01

    The detection and matching of robust features in images is an important step in many computer vision applications. In this paper, the importance of the keypoint detection algorithms and their inherent parameters in the particular context of an image-based change detection system for IED detection is studied. Through extensive application-oriented experiments, we draw an evaluation and comparison of the most popular feature detectors proposed by the computer vision community. We analyze how to automatically adjust these algorithms to changing imaging conditions and suggest improvements in order to achieve more flexibility and robustness in their practical implementation.

  4. Accuracy, repeatability, and reproducibility of Artemis very high-frequency digital ultrasound arc-scan lateral dimension measurements

    PubMed Central

    Reinstein, Dan Z.; Archer, Timothy J.; Silverman, Ronald H.; Coleman, D. Jackson

    2008-01-01

    Purpose To determine the accuracy, repeatability, and reproducibility of measurement of lateral dimensions using the Artemis (Ultralink LLC) very high-frequency (VHF) digital ultrasound (US) arc scanner. Setting London Vision Clinic, London, United Kingdom. Methods A test object was measured first with a micrometer and then with the Artemis arc scanner. Five sets of 10 consecutive B-scans of the test object were performed with the scanner. The test object was removed from the system between each scan set. One expert observer and one newly trained observer separately measured the lateral dimension of the test object. Two-factor analysis of variance was performed. The accuracy was calculated as the average bias of the scan set averages. The repeatability and reproducibility coefficients were calculated. The coefficient of variation (CV) was calculated for repeatability and reproducibility. Results The test object was measured to be 10.80 mm wide. The mean lateral dimension bias was 0.00 mm. The repeatability coefficient was 0.114 mm. The reproducibility coefficient was 0.026 mm. The repeatability CV was 0.38%, and the reproducibility CV was 0.09%. There was no statistically significant variation between observers (P = .0965). There was a statistically significant variation between scan sets (P = .0036) attributed to minor vertical changes in the alignment of the test object between consecutive scan sets. Conclusion The Artemis VHF digital US arc scanner obtained accurate, repeatable, and reproducible measurements of lateral dimensions of the size commonly found in the anterior segment. PMID:17081860
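    The accuracy and variability statistics above reduce to simple formulas (accuracy as mean bias from the known width; coefficient of variation as SD divided by the mean). A minimal sketch with hypothetical repeated measurements of the 10.80 mm test object:

    ```python
    import statistics

    def coefficient_of_variation(values):
        """CV as a percentage: sample standard deviation divided by the mean."""
        return statistics.stdev(values) / statistics.fmean(values) * 100.0

    # Hypothetical repeated width measurements (mm) of the 10.80 mm test object;
    # the actual scan data from the study is not available here.
    scans = [10.80, 10.78, 10.82, 10.79, 10.81]
    bias = statistics.fmean(scans) - 10.80  # accuracy: mean deviation from truth
    cv_percent = coefficient_of_variation(scans)
    print(round(bias, 3), round(cv_percent, 2))
    ```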

  5. Differences in children and adolescents' ability of reporting two CVS-related visual problems.

    PubMed

    Hu, Liang; Yan, Zheng; Ye, Tiantian; Lu, Fan; Xu, Peng; Chen, Hao

    2013-01-01

    The present study examined whether children and adolescents can correctly report dry eyes and blurred distance vision, two visual problems associated with computer vision syndrome. Participants were 913 children and adolescents aged 6-17. They were asked to report their visual problems, including dry eyes and blurred distance vision, and received an eye examination, including tear film break-up time (TFBUT) and visual acuity (VA). Inconsistency was found between participants' reports of dry eyes and TFBUT results among all 913 participants as well as for all four subgroups. In contrast, consistency was found between participants' reports of blurred distance vision and VA results among 873 participants who had never worn glasses as well as for the four subgroups. It was concluded that children and adolescents are unable to report dry eyes correctly; however, they are able to report blurred distance vision correctly. Three practical implications of the findings were discussed. Little is known about children's ability to report their visual problems, an issue critical to the diagnosis and treatment of children's computer vision syndrome. This study compared children's self-reports and clinical examination results and found children can correctly report blurred distance vision but not dry eyes.

  6. Analysis of Global Properties of Shapes

    DTIC Science & Technology

    2010-06-01

    Conference on Computer Vision (ICCV) (Beijing, China, 2005), IEEE. [113] Thrun, S., and Wegbreit, B. Shape from symmetry. In Proceedings of the...International Conference on Computer Vision (ICCV) (Beijing, China, 2005), IEEE. [114] Toshev, A., Shi, J., and Daniilidis, K. Image matching via saliency...applications ranging from sampling points to finding correspondences to shape simplification. Discrete variants of the Laplace-Beltrami operator [108] and

  7. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    ERIC Educational Resources Information Center

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  8. Intraoral radiographs texture analysis for dental implant planning.

    PubMed

    Mundim, Mayara B V; Dias, Danilo R; Costa, Ronaldo M; Leles, Cláudio R; Azevedo-Marques, Paulo M; Ribeiro-Rotta, Rejane F

    2016-11-01

    Computer vision extracts features or attributes from images, improving diagnosis accuracy and aiding in clinical decisions. This study aims to investigate the feasibility of using texture analysis of periapical radiograph images as a tool for dental implant treatment planning. Periapical radiograph images of 127 jawbone sites were obtained before and after implant placement. From the superimposition of the pre- and post-implant images, four regions of interest (ROI) were delineated on the pre-implant images for each implant site: mesial, distal and apical peri-implant areas and a central area. Each ROI was analysed using Matlab® software and seven image attributes were extracted: mean grey level (MGL), standard deviation of grey levels (SDGL), coefficient of variation (CV), entropy (En), contrast, correlation (Cor) and angular second moment (ASM). Images were grouped by bone type (Lekholm and Zarb classification, types 1-4). Peak insertion torque (PIT) and resonance frequency analysis (RFA) were recorded during implant placement. Differences among groups were tested for each image attribute. Agreement between measurements of the peri-implant ROIs and overall ROI (peri-implant + central area) was tested, as well as the association between primary stability measures (PIT and RFA) and texture attributes. Differences among bone type groups were found for MGL (p = 0.035), SDGL (p = 0.024), CV (p < 0.001) and En (p < 0.001). The apical ROI showed a significant difference from the other regions for all attributes, except Cor. Concordance correlation coefficients were all almost perfect (ρ > 0.93), except for ASM (ρ = 0.62). Texture attributes were significantly associated with the implant stability measures. Texture analysis of periapical radiographs may be a reliable non-invasive quantitative method for the assessment of jawbone and prediction of implant stability, with potential clinical applications. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
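    The first-order attributes listed above (MGL, SDGL, CV, entropy) have straightforward definitions; a NumPy sketch on a synthetic 8-bit ROI follows. The second-order attributes (contrast, correlation, ASM) would additionally require a grey-level co-occurrence matrix, which is omitted here:

    ```python
    import numpy as np

    def first_order_texture(roi):
        """Mean grey level, SD of grey levels, CV, and Shannon entropy of an ROI."""
        mgl = roi.mean()
        sdgl = roi.std()
        hist = np.bincount(roi.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()       # grey-level probability distribution
        p = p[p > 0]                # drop empty bins before taking logs
        entropy = -np.sum(p * np.log2(p))
        return {"MGL": mgl, "SDGL": sdgl, "CV": sdgl / mgl, "En": entropy}

    # Synthetic 8-bit ROI: uniform noise, whose entropy approaches the
    # 8-bit maximum of 8 bits. Real ROIs come from radiograph crops.
    rng = np.random.default_rng(0)
    roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    attrs = first_order_texture(roi)
    print({k: round(float(v), 3) for k, v in attrs.items()})
    ```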

  9. Monitoring system of multiple fire fighting based on computer vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. The system can locate the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment, and system calibration are described in detail; the design of the relevant hardware and software is also introduced. At the same time, the principle and process of color detection and image processing are given as well. The system runs well in testing, with high reliability, low cost, and easy node expansion, and has a bright prospect of application and popularization.
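    Color-based fire detection of the kind described above is commonly implemented as a per-pixel color-rule threshold. A minimal NumPy sketch of one such rule (red-dominant pixels with red > green > blue); the rule and threshold values are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    # Hypothetical RGB rule for flame-like pixels: strong red channel and a
    # red > green > blue ordering. Thresholds are illustrative only.
    def fire_mask(img, r_min=180):
        """Boolean mask of pixels matching the flame color rule."""
        r = img[..., 0].astype(int)
        g = img[..., 1].astype(int)
        b = img[..., 2].astype(int)
        return (r > r_min) & (r > g) & (g > b)

    img = np.zeros((4, 4, 3), dtype=np.uint8)
    img[1, 1] = (250, 160, 40)   # flame-like pixel
    img[2, 2] = (40, 80, 200)    # sky-like pixel
    mask = fire_mask(img)
    print(int(mask.sum()))  # 1: only the flame-like pixel matches
    ```

    In a deployed system the mask centroid would drive the hydrant aiming described in the abstract.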

  10. Benefit from NASA

    NASA Image and Video Library

    1985-01-01

    The NASA imaging processing technology, an advanced computer technique to enhance images sent to Earth in digital form by distant spacecraft, helped develop a new vision screening process. The Ocular Vision Screening system, an important step in preventing vision impairment, is a portable device designed especially to detect eye problems in children through the analysis of retinal reflexes.

  11. Secure Computer Systems: Extensions to the Bell-La Padula Model

    DTIC Science & Technology

    2009-01-01

    countable and X_C ∈ ℝⁿ; V is a finite collection of input variables. We assume V = (V_D ∪ V_C) with V_D countable and V_C ∈ ℝⁿ; Init ⊆ X is a set of...assume V = (V_D ∪ V_C) with V_D countable and V_C ∈ ℝⁿ; Init ⊆ X is a set of initial states; f : X × V → X_C is a vector field, assumed to be globally...built under the Eclipse Swordfish project. As indicated on the project web site, "The goal of the Swordfish project is to provide an extensible SOA

  12. Choroideremia

    MedlinePlus

    ... a treatment is discovered, help is available through low-vision aids, including optical, electronic, and computer-based devices. ...

  13. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.

  15. Eyesight quality and Computer Vision Syndrome

    PubMed Central

    Bogdănici, Camelia Margareta; Săndulache, Diana Elena; Nechita, Corina Andreea

    2017-01-01

    The aim of the study was to analyze the effects that gadgets have on eyesight quality. A prospective observational study was conducted from January to July 2016 on 60 people who were divided into two groups: Group 1 – 30 middle school pupils with a mean age of 11.9 ± 1.86 years and Group 2 – 30 patients evaluated in the Ophthalmology Clinic, “Sf. Spiridon” Hospital, Iași, with a mean age of 21.36 ± 7.16 years. The clinical parameters observed were the following: visual acuity (VA), objective refraction, binocular vision (BV), fusional amplitude (FA), and Schirmer’s test. A questionnaire containing 8 questions that highlighted the gadgets’ impact on eyesight was also distributed. The use of different gadgets, such as computers, laptops, mobile phones, or other displays, has become part of our everyday life, and people experience a variety of ocular symptoms or vision problems related to them. Computer Vision Syndrome (CVS) represents a group of visual and extraocular symptoms associated with sustained use of visual display terminals. Headache, blurred vision, and ocular congestion are the most frequent manifestations caused by prolonged use of gadgets. Mobile phones and laptops are the most frequently used gadgets. People who use gadgets for a long time sustain a prolonged accommodative effort. A small amount of refractive error (especially a myopic shift) has been objectively recorded by various studies on near work. Dry eye syndrome could also be identified, and an improvement of visual comfort could be observed after the instillation of artificial tear drops. Computer Vision Syndrome is still under-diagnosed, and people should be made aware of the adverse effects that prolonged use of gadgets has on eyesight. PMID:29450383

  16. Audible vision for the blind and visually impaired in indoor open spaces.

    PubMed

    Yu, Xunyi; Ganz, Aura

    2012-01-01

    In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her position relative to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in a real-life, ever-changing environment crowded with people.

  17. Characterizing Accuracy and Precision of Glucose Sensors and Meters

    PubMed Central

    2014-01-01

    There is a need for a method that describes the precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a “Glucose Precision Profile” showing absolute relative deviation (ARD) and/or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test – comparator method) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve smoothing procedures to minimize the effects of random sampling variability and to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of mean ARD (MARD) and %CV are subject to relatively large errors in the hypoglycemic range, due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in the hypoglycemic range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
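
    The per-pair quantities in the abstract can be made concrete with a short sketch. The function names are illustrative, and the %CV convention used here (SD of replicate readings over their mean) is an assumption, not the paper's code:

```python
import numpy as np

def precision_metrics(test, comparator):
    """Per-pair error metrics for paired glucose readings:
    signed deviation, absolute deviation (AD), absolute relative
    deviation (ARD, %), and the summary mean ARD (MARD)."""
    test = np.asarray(test, dtype=float)
    comp = np.asarray(comparator, dtype=float)
    deviation = test - comp            # signed bias for each pair
    ad = np.abs(deviation)             # absolute deviation
    ard = 100.0 * ad / comp            # absolute relative deviation, %
    mard = ard.mean()                  # mean ARD over all pairs
    return deviation, ad, ard, mard

def percent_cv(replicates):
    """Coefficient of variation (%) of replicate measurements."""
    r = np.asarray(replicates, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()

# Three test/comparator pairs (mg/dL): ARD = 10%, 5%, 5%.
_, _, ard, mard = precision_metrics([110, 95, 210], [100, 100, 200])
```

    Plotting `ard` (or `percent_cv` per glucose bin) against the comparator glucose level, then smoothing, yields the continuous precision profile the abstract describes.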

  18. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    PubMed

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiving operator characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and its comparison with the existing methods including the Akaike information criterion (AIC), Bayesian information criterion (BIC) or Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from the studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
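
    A minimal sketch of the CV-AUC criterion itself, under stated assumptions: no standard library implements the MCP penalty, so a plain L2-penalized logistic regression fitted by gradient descent stands in for the paper's MCP coordinate-descent solver, and the data, grid values, and function names are invented for illustration:

```python
import numpy as np

def auc(y_true, scores):
    """Empirical AUC: probability a random positive outranks a random
    negative (ties count half)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def fit_logistic(X, y, lam, steps=200, lr=0.1):
    """L2-penalized logistic regression by gradient descent; a simple
    stand-in for the MCP-penalized fit used in the paper."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y) + lam * w)
    return w

def cv_auc(X, y, lam, k=5, seed=0):
    """K-fold cross-validated AUC for one tuning-parameter value."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    aucs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = fit_logistic(X[train], y[train], lam)
        aucs.append(auc(y[fold], X[fold] @ w))
    return float(np.mean(aucs))

# Toy data: 2 informative features out of 20.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(200) > 0).astype(float)

# CV-AUC criterion: keep the tuning parameter maximizing held-out AUC.
grid = [0.001, 0.01, 0.1, 1.0]
best_lam = max(grid, key=lambda lam: cv_auc(X, y, lam))
```

    The design point is that the selection target (held-out AUC) matches the evaluation target for binary classification, rather than a likelihood-based surrogate such as AIC or BIC.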

  19. An improved three-dimension reconstruction method based on guided filter and Delaunay

    NASA Astrophysics Data System (ADS)

    Liu, Yilin; Su, Xiu; Liang, Haitao; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

    Binocular stereo vision is becoming a research hotspot in the area of image processing. Based on the traditional adaptive-weight stereo matching algorithm, we improve the cost volume by averaging the AD (Absolute Difference) over the RGB color channels and adding the x-derivative of the grayscale image. We then use a guided filter in the cost aggregation step and a weighted median filter in post-processing to address edge problems. To obtain locations in real space, we combine the depth information with the camera calibration to project each pixel of the 2D image into a 3D coordinate matrix. We add the concept of projection to the region-growing algorithm for surface reconstruction: all points are projected onto a 2D plane along the normals of the point clouds, and the results are mapped back to 3D space according to the connection relationships among the points in the 2D plane. For the triangulation in the 2D plane we use the Delaunay algorithm, because it yields meshes of optimal quality. We configured OpenCV and PCL in Visual Studio for testing, and the experimental results show that the proposed algorithm achieves higher disparity accuracy and can reproduce the details of the real mesh model.
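
    A hedged numpy sketch of the cost-volume construction described above: the mean absolute color difference blended with the absolute difference of horizontal grayscale derivatives. The blending weight and array layout are illustrative assumptions, and the guided-filter aggregation step is left as a comment:

```python
import numpy as np

def cost_volume(left, right, max_disp, alpha=0.9):
    """Raw matching cost per pixel and disparity candidate: mean AD over
    color channels blended with the AD of x-derivatives of grayscale.
    `alpha` is an illustrative blending weight, not a value from the paper."""
    h, w, _ = left.shape
    gray_l = left.mean(axis=2)
    gray_r = right.mean(axis=2)
    dx_l = np.gradient(gray_l, axis=1)   # horizontal grayscale derivative
    dx_r = np.gradient(gray_r, axis=1)
    vol = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # Shift the right image by d pixels to align candidate matches.
        ad = np.abs(left[:, d:] - right[:, :w - d]).mean(axis=2)
        grad = np.abs(dx_l[:, d:] - dx_r[:, :w - d])
        vol[d, :, d:] = alpha * ad + (1 - alpha) * grad
    return vol  # aggregate (e.g. guided filter), then take argmin over d

left = np.random.rand(8, 16, 3)
vol = cost_volume(left, left, max_disp=4)   # identical views: best disparity is 0
disp = vol.argmin(axis=0)                    # winner-take-all disparity map
```

    In the full pipeline, each disparity slice of `vol` would be smoothed with a guided filter (using the left image as guidance) before the argmin, and a weighted median filter would clean the resulting disparity map.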

  20. [The Performance Analysis for Lighting Sources in Highway Tunnel Based on Visual Function].

    PubMed

    Yang, Yong; Han, Wen-yuan; Yan, Ming; Jiang, Hai-feng; Zhu, Li-wei

    2015-10-01

    Under mesopic vision conditions, the spectral luminous efficiency function is represented by a family of curves whose peak wavelength and intensity are affected by the light spectrum, the background luminance, and other factors, so the effect of a light source on visibility cannot be characterized by a single optical parameter. In this experiment, the reaction time of visual cognition is used as the evaluation index, tested with the visual function method under different speeds and luminous environments. The light sources included high-pressure sodium, an electrodeless fluorescent lamp, and white LEDs with three color temperatures (ranging from 1 958 to 5 537 K). The background luminance values, between 1 and 5 cd·m-2, correspond to the basic sections of highway tunnel lighting and to general outdoor lighting, and all lie within the mesopic range. The test results show that, under the same speed and luminance, the visual-cognition reaction time for a high-color-temperature source is shorter than that for a low-color-temperature one, and the reaction time for a visual target at high speed is shorter than that at low speed; at the final moment, however, the visual angle subtended by the target in the observer's visual field was larger at low speed than at high speed. Based on the MOVE model, the mesopic equivalent luminance was calculated for the different emission spectra and background luminances produced by the test sources. Compared with the photopic result, the coefficient of variation (CV) of the reaction-time curve plotted against mesopic equivalent luminance is smaller. 
Under mesopic conditions, the discrepancy between the equivalent luminance of different light sources and their photopic luminance is one of the main causes of differences in visual recognition. The emission peak of the GaN chip is close to the peak wavelength of the photopic luminous efficiency function; accordingly, the lighting visual effect of white LEDs at high color temperature is better than at low color temperature or with the electrodeless fluorescent lamp, while high-pressure sodium performs poorly because its spectral peak lies near the Na+ characteristic lines.

  1. What Is Low Vision?

    MedlinePlus

    ... magnifying reading glasses or loupes for seeing the computer screen , sheet music, or for sewing telescopic glasses ... for the Blind services. The Low Vision Pilot Project The American Foundation for the Blind (AFB) has ...

  2. Development of a Vision-Based Situational Awareness Capability for Unmanned Surface Vessels

    DTIC Science & Technology

    2017-09-01

    used to provide an SA capability for USVs. This thesis addresses the following research questions: (1) Can a computer vision–based technique be...BLANK 51 VI. CONCLUSION AND RECOMMENDATIONS A. CONCLUSION This research demonstrated the feasibility of using a computer vision–based ...VISION-BASED SITUATIONAL AWARENESS CAPABILITY FOR UNMANNED SURFACE VESSELS by Ying Jie Benjemin Toh September 2017 Thesis Advisor: Oleg

  3. Remote sensing of vegetation structure using computer vision

    NASA Astrophysics Data System (ADS)

    Dandois, Jonathan P.

    High-spatial resolution measurements of vegetation structure are needed for improving understanding of ecosystem carbon, water and nutrient dynamics, the response of ecosystems to a changing climate, and for biodiversity mapping and conservation, among many research areas. Our ability to make such measurements has been greatly enhanced by continuing developments in remote sensing technology, which allows researchers to measure numerous forest traits at varying spatial and temporal scales and over large spatial extents with minimal to no field work, which is costly for large spatial areas or logistically difficult in some locations. Despite these advances, there remain several research challenges related to the methods by which three-dimensional (3D) and spectral datasets are joined (remote sensing fusion) and the availability and portability of systems for frequent data collections at small scale sampling locations. Recent advances in the areas of computer vision structure from motion (SFM) and consumer unmanned aerial systems (UAS) offer the potential to address these challenges by enabling repeatable measurements of vegetation structural and spectral traits at the scale of individual trees. However, the potential advances offered by computer vision remote sensing also present unique challenges and questions that need to be addressed before this approach can be used to improve understanding of forest ecosystems. For computer vision remote sensing to be a valuable tool for studying forests, bounding information about the characteristics of the data produced by the system will help researchers understand and interpret results in the context of the forest being studied and of other remote sensing techniques. This research advances understanding of how forest canopy and tree 3D structure and color are accurately measured by a relatively low-cost and portable computer vision personal remote sensing system: 'Ecosynth'. 
Recommendations are made for optimal conditions under which forest structure measurements should be obtained with UAS-SFM remote sensing. Ultimately remote sensing of vegetation by computer vision offers the potential to provide an 'ecologist's eye view', capturing not only canopy 3D and spectral properties, but also seeing the trees in the forest and the leaves on the trees.

  4. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to exemplify the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  5. Illumination-based synchronization of high-speed vision sensors.

    PubMed

    Hou, Lei; Kagami, Shingo; Hashimoto, Koichi

    2010-01-01

    To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition times of vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light reaching a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with 32 μs jitter.
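
    A toy illustration of the idea, not the paper's sensor circuit: a digital proportional-integral phase-locked loop that nudges a local frame phase toward a drifting reference each frame. The gains and phase model are invented for illustration:

```python
import numpy as np

def pll_lock(ref_phase, kp=0.15, ki=0.01):
    """Toy digital PLL: a local oscillator phase is nudged toward a
    reference phase each frame via a proportional-integral loop on the
    wrapped phase error. Returns the per-frame phase error history."""
    local = 0.0    # local frame phase (analogous to frame timing)
    integ = 0.0    # integrator state; absorbs the reference's drift rate
    errors = []
    for k in range(len(ref_phase)):
        # Wrap the error into (-pi, pi] so the loop sees the shortest path.
        err = float(np.angle(np.exp(1j * (ref_phase[k] - local))))
        integ += ki * err
        local += kp * err + integ   # advance the local frame timing
        errors.append(err)
    return np.array(errors)

# Reference drifting at a constant phase rate; the PI loop should lock,
# driving the steady-state error to zero despite the drift.
ref = 0.05 * np.arange(400)
err = pll_lock(ref)
```

    The integrator is what lets the loop track a reference with a constant frequency offset with zero steady-state phase error; a proportional-only loop would settle at a constant lag instead.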

  6. Evaluation of the combined use of metronomic zoledronic acid and Coriolus versicolor in intratibial breast cancer mouse model.

    PubMed

    Ko, Chun-Hay; Yue, Grace Gar-Lee; Gao, Si; Luo, Ke-Wang; Siu, Wing-Sum; Shum, Wai-Ting; Shiu, Hoi-Ting; Lee, Julia Kin-Ming; Li, Gang; Leung, Ping-Chung; Evdokiou, Andreas; Lau, Clara Bik-San

    2017-05-23

    Coriolus versicolor (CV) is a mushroom traditionally used for strengthening the immune system and nowadays used as an immunomodulatory adjuvant in anticancer therapy. Breast cancer usually metastasizes to the skeleton, interrupting the normal bone remodeling process and causing osteolytic bone lesions. The aims of the present study were to evaluate its herb-drug interaction with metronomic zoledronic acid (mZOL) in preventing cancer propagation, metastasis and bone destruction. Mice inoculated in the tibia with human breast cancer cells tagged with luciferase (MDA-MB-231-TXSA) were treated with CV aqueous extract, mZOL, or the combination of both for 4 weeks. Alterations of the luciferase signals in tibia, liver and lung were quantified using the IVIS imaging system. The skeletal response was evaluated using micro-computed tomography (micro-CT). In vitro experiments were carried out to confirm the in vivo findings. Results showed that the combination of CV and mZOL diminished tumor growth without increasing the incidence of lung and liver metastasis in the intratibial breast tumor model. The combination therapy also preserved the integrity of the bones. In vitro studies demonstrated that combined use of CV and mZOL inhibited cancer cell proliferation and osteoclastogenesis. These findings suggested that combination treatment with CV and mZOL attenuated breast tumor propagation and protected against osteolytic bone lesions without significant metastases. This study provides scientific evidence on the beneficial outcome of using CV together with mZOL in the management of breast cancer and metastasis, which may lead to the development of CV as an adjuvant health supplement for the control of breast cancer. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  7. Image Understanding Architecture

    DTIC Science & Technology

    1991-09-01

    architecture to support real-time, knowledge -based image understanding , and develop the software support environment that will be needed to utilize...NUMBER OF PAGES Image Understanding Architecture, Knowledge -Based Vision, AI Real-Time Computer Vision, Software Simulator, Parallel Processor IL PRICE... information . In addition to sensory and knowledge -based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers

  8. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  9. The relationship between duration of psoriasis, vascular inflammation, and cardiovascular events.

    PubMed

    Egeberg, Alexander; Skov, Lone; Joshi, Aditya A; Mallbris, Lotus; Gislason, Gunnar H; Wu, Jashin J; Rodante, Justin; Lerman, Joseph B; Ahlman, Mark A; Gelfand, Joel M; Mehta, Nehal N

    2017-10-01

    Psoriasis is associated with risk of cardiovascular (CV) disease (CVD) and major adverse CV events (MACEs). Whether psoriasis duration affects the risk of vascular inflammation and MACEs has not been well characterized. We utilized two resources to understand the effect of psoriasis duration on vascular disease and CV events: (1) a human imaging study and (2) a population-based study of CVD events. First, patients with psoriasis (N = 190) underwent fludeoxyglucose F 18 positron emission tomography/computed tomography (duration effect reported as a β-coefficient). Second, MACE risk was examined by using nationwide registries (adjusted hazard ratios in patients with psoriasis (n = 87,161) versus the general population (n = 4,234,793)). In the human imaging study, patients were young, of low CV risk by traditional risk scores, and had a high prevalence of cardiometabolic diseases. Vascular inflammation by fludeoxyglucose F 18 positron emission tomography/computed tomography was significantly associated with disease duration (β = 0.171, P = .002). In the population-based study, psoriasis duration had a strong relationship with MACE risk (1.0% per additional year of psoriasis duration [hazard ratio, 1.010; 95% confidence interval, 1.007-1.013]). These studies utilized observational data. We found detrimental effects of psoriasis duration on vascular inflammation and MACEs, suggesting that cumulative duration of exposure to low-grade chronic inflammation may accelerate vascular disease development and MACEs. Providers should consider inquiring about duration of disease to counsel for heightened CVD risk in psoriasis. Copyright © 2017 American Academy of Dermatology, Inc. All rights reserved.

  10. Low Vision Aids and Low Vision Rehabilitation

    MedlinePlus

    ... SeeingAI), magnify, or illuminate. Another app, EyeNote, is free for Apple products. It scans and identifies the denomination of U.S. paper money. Computers that can read aloud or magnify what ...

  11. Supporting Real-Time Computer Vision Workloads using OpenVX on Multicore+GPU Platforms

    DTIC Science & Technology

    2015-05-01

    a registered trademark of the NVIDIA Corporation. ...from NVIDIA, we adapted an alpha-version of an NVIDIA OpenVX implementation called VisionWorks® [3] to run atop PGMRT (a graph-based middleware...time support to an OpenVX implementation by NVIDIA called VisionWorks. Our modifications were applied to an alpha-version of VisionWorks. This alpha

  12. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  13. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
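
    The bandpass-filtered (Laplacian) image pyramids mentioned above can be sketched as follows. This numpy-only version, using a simple [1, 2, 1] blur and nearest-neighbor upsampling, is illustrative and not the patented implementation:

```python
import numpy as np

def _blur_downsample(img):
    """Separable [1, 2, 1]/4 blur followed by 2x subsampling."""
    k = np.array([0.25, 0.5, 0.25])
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img[::2, ::2]

def _upsample(img, shape):
    """Nearest-neighbor 2x upsampling cropped back to `shape`."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=4):
    """Bandpass (Laplacian) pyramid: each level stores the detail lost
    between one scale and the next-coarser scale; the final entry is the
    lowpass residual. Stereo matching by correlation can then run on a
    coarse bandpass level, as in the abstract's 60 x 64 level."""
    img = np.asarray(img, dtype=float)
    pyr = []
    for _ in range(levels - 1):
        down = _blur_downsample(img)
        pyr.append(img - _upsample(down, img.shape))  # bandpass detail
        img = down
    pyr.append(img)                                   # lowpass residual
    return pyr

pyr = laplacian_pyramid(np.random.rand(32, 32), levels=4)
```

    Matching on bandpass levels rather than raw intensities makes the least-squares correlation insensitive to the overall brightness offset between the two cameras.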

  14. Overview of sports vision

    NASA Astrophysics Data System (ADS)

    Moore, Linda A.; Ferreira, Jannie T.

    2003-03-01

    Sports vision encompasses the visual assessment and provision of sports-specific visual performance enhancement and ocular protection for athletes of all ages, genders and levels of participation. In recent years, sports vision has been identified as one of the key performance indicators in sport. It is built on four main cornerstones: corrective eyewear, protective eyewear, visual skills enhancement and performance enhancement. Although clinically well established in the US, it is still a relatively new area of optometric specialisation elsewhere in the world and is gaining increasing popularity with eyecare practitioners and researchers. This research is often multi-disciplinary and involves input from a variety of subject disciplines, mainly those of optometry, medicine, physiology, psychology, physics, chemistry, computer science and engineering. Collaborative research projects are currently underway between staff of the Schools of Physics and Computing (DIT) and the Academy of Sports Vision (RAU).

  15. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.

  16. From Survey to FEM Analysis for Documentation of Built Heritage: the Case Study of Villa Revedin-Bolasco

    NASA Astrophysics Data System (ADS)

    Guarnieri, A.; Fissore, F.; Masiero, A.; Di Donna, A.; Coppa, U.; Vettore, A.

    2017-05-01

    In the last decade, advances in the fields of close-range photogrammetry, terrestrial laser scanning (TLS) and computer vision (CV) have made it possible to collect different kinds of information about Cultural Heritage objects and to produce highly accurate 3D models. Additionally, the integration of laser scanning technology and Finite Element Analysis (FEA) has gained particular interest in recent years for the structural analysis of built heritage, since increasing computational capabilities allow large datasets to be manipulated. In this note we illustrate the approach adopted for surveying, 3D modeling and structural analysis of Villa Revedin-Bolasco, a magnificent historical building located in the small walled town of Castelfranco Veneto, in northern Italy. In 2012 CIRGEO was charged by the University of Padova to carry out a survey of the Villa and Park as a preliminary step for subsequent restoration works. The inner geometry of the Villa was captured with two Leica Disto D3a BT hand-held laser meters, while the outer walls of the building were surveyed with Leica C10 and Faro Focus 3D 120 terrestrial laser scanners. Ancillary GNSS measurements were also collected for georeferencing the 3D laser model. A solid model was then generated from the global laser point cloud in Rhinoceros software, and a portion of it was used for simulation in a Finite Element Analysis (FEA). In the paper we discuss in detail the steps and challenges addressed, and the solutions adopted, concerning the survey, solid modeling and FEA from laser scanning data of the historical complex of Villa Revedin-Bolasco.

  17. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  18. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology. This application combines interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system calculates the color characteristics of an object and then performs color segmentation. To avoid erroneous action judgments, the system uses a weight voting mechanism, which assigns a condition score and weight value to each candidate action judgment and chooses the best judgment from the vote. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
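
    A minimal sketch of a weight voting mechanism of the kind described: each visual cue assigns every candidate action judgment a condition score, the scores are combined with per-cue weights, and the highest total wins. The cue names, scores, and weights below are invented for illustration:

```python
def weighted_vote(candidates, cue_scores, cue_weights):
    """Weight voting over candidate action judgments: each cue scores
    every candidate in [0, 1]; scores are combined with per-cue weights
    and the best-scoring action is returned along with all totals."""
    totals = {}
    for action in candidates:
        totals[action] = sum(cue_weights[cue] * cue_scores[cue][action]
                             for cue in cue_weights)
    return max(totals, key=totals.get), totals

actions = ["raise_left_arm", "raise_right_arm"]
scores = {
    "color_segmentation": {"raise_left_arm": 0.9, "raise_right_arm": 0.3},
    "motion_region":      {"raise_left_arm": 0.6, "raise_right_arm": 0.7},
}
weights = {"color_segmentation": 0.7, "motion_region": 0.3}
best, totals = weighted_vote(actions, scores, weights)
```

    The point of weighting is that a single noisy cue (here, the motion region) cannot overturn the judgment when a more trusted cue disagrees.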

  19. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    NASA Astrophysics Data System (ADS)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.
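
    A drastically simplified stand-in for the pipeline described: the paper uses keypoint detection and description to build a microstructural image representation, whereas here a per-image gradient-magnitude histogram plays that role, with nearest-centroid matching as the classifier. All data below are synthetic and the names are illustrative:

```python
import numpy as np

def texture_features(img, bins=8):
    """Crude per-micrograph representation: a normalized histogram of
    gradient magnitudes (a stand-in for keypoint-descriptor statistics)."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def nearest_centroid(features, labels, query):
    """Classify a query image by the closest class-mean feature vector."""
    feats = np.asarray(features)
    classes = sorted(set(labels))
    centroids = {c: feats[[l == c for l in labels]].mean(axis=0)
                 for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(query - centroids[c]))

rng = np.random.default_rng(0)
smooth = [rng.random((32, 32)) * 0.1 for _ in range(3)]  # low-contrast "powder A"
rough = [rng.random((32, 32)) for _ in range(3)]         # high-contrast "powder B"
X = [texture_features(im) for im in smooth + rough]
y = ["A"] * 3 + ["B"] * 3
pred = nearest_centroid(X, y, texture_features(rng.random((32, 32))))
```

    The same feature vectors can feed clustering (to find representative and atypical powder images) as well as classification, which mirrors how one representation serves both tasks in the abstract.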

  20. CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri

    2014-01-01

    This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

  1. Remote media vision-based computer input device

    NASA Astrophysics Data System (ADS)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

    In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  2. The vertical monitor position for presbyopic computer users with progressive lenses: how to reach clear vision and comfortable head posture.

    PubMed

    Weidling, Patrick; Jaschinski, Wolfgang

    2015-01-01

    When presbyopic employees wear general-purpose progressive lenses, they have clear vision only with a lower gaze inclination to the computer monitor, given that the head assumes a comfortable inclination. Therefore, in the present intervention field study the monitor position was lowered, with the additional aim of reducing musculoskeletal symptoms. A comparison group comprised users of lenses that do not restrict the field of clear vision. The lower monitor positions led the participants to lower their head inclination, which was linearly associated with a significant reduction in musculoskeletal symptoms. However, for progressive lenses a lower head inclination means a lower zone of clear vision, so clear vision of the complete monitor was not achieved; rather, the monitor should have been placed even lower. The procedures of this study may be useful for optimising the individual monitor position depending on the comfortable head and gaze inclination and the vertical zone of clear vision of progressive lenses. For users of general-purpose progressive lenses, it is suggested that low monitor positions allow for clear vision at the monitor and a physiologically favourable head inclination. Employees may improve their workplace using a flyer providing ergonomic-optometric information.

  3. Using noun phrases for navigating biomedical literature on Pubmed: how many updates are we losing track of?

    PubMed

    Srikrishna, Devabhaktuni; Coram, Marc A

    2011-01-01

    Author-supplied citations are a fraction of the related literature for a paper. The "related citations" list on PubMed is typically dozens or hundreds of results long and does not offer hints as to why these results are related. Using noun phrases derived from the sentences of the paper, we show it is possible to more transparently navigate to PubMed updates through search terms that can associate a paper with its citations. The algorithm to generate these search terms involves automatically extracting noun phrases from the paper using natural language processing tools and ranking them by the number of occurrences in the paper compared to the number of occurrences on the web. We define search queries having at least one instance of overlap between the author-supplied citations of the paper and the top 20 search results as citation validated (CV). When the overlapping citations were written by the same authors as the paper itself, we define this as CV-S; when written by different authors, as CV-D. For a systematic sample of 883 papers on PubMed Central, at least one of the search terms for 86% of the papers is CV-D, versus 65% for the top 20 PubMed "related citations." We hypothesize that these quantities, computed for the 20 million papers on PubMed, differ within 5% of these percentages. Averaged across all 883 papers, 5 search terms are CV-D, 10 search terms are CV-S, and 6 unique citations validate these searches. Potentially related literature uncovered by citation-validated searches (either CV-S or CV-D) is on the order of ten per paper, and many more if the remaining searches that are not citation-validated are taken into account. The significance and relationship of each search result to the paper can only be vetted and explained by a researcher with knowledge of or interest in that paper.

  4. Using Noun Phrases for Navigating Biomedical Literature on Pubmed: How Many Updates Are We Losing Track of?

    PubMed Central

    Srikrishna, Devabhaktuni; Coram, Marc A.

    2011-01-01

    Author-supplied citations are a fraction of the related literature for a paper. The "related citations" list on PubMed is typically dozens or hundreds of results long and does not offer hints as to why these results are related. Using noun phrases derived from the sentences of the paper, we show it is possible to more transparently navigate to PubMed updates through search terms that can associate a paper with its citations. The algorithm to generate these search terms involves automatically extracting noun phrases from the paper using natural language processing tools and ranking them by the number of occurrences in the paper compared to the number of occurrences on the web. We define search queries having at least one instance of overlap between the author-supplied citations of the paper and the top 20 search results as citation validated (CV). When the overlapping citations were written by the same authors as the paper itself, we define this as CV-S; when written by different authors, as CV-D. For a systematic sample of 883 papers on PubMed Central, at least one of the search terms for 86% of the papers is CV-D, versus 65% for the top 20 PubMed "related citations." We hypothesize that these quantities, computed for the 20 million papers on PubMed, differ within 5% of these percentages. Averaged across all 883 papers, 5 search terms are CV-D, 10 search terms are CV-S, and 6 unique citations validate these searches. Potentially related literature uncovered by citation-validated searches (either CV-S or CV-D) is on the order of ten per paper, and many more if the remaining searches that are not citation-validated are taken into account. The significance and relationship of each search result to the paper can only be vetted and explained by a researcher with knowledge of or interest in that paper. PMID:21935487
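The ranking step described in this record can be sketched as follows; the phrases and counts are invented for illustration, and real web-occurrence counts would come from a search engine rather than a hard-coded table.

```python
# Sketch of the ranking idea (phrases and counts invented): score each
# candidate noun phrase by its frequency in the paper relative to its
# estimated frequency on the web, so paper-specific phrases rank highest.
def rank_phrases(paper_counts, web_counts, top_k=3):
    scores = {p: n / (1 + web_counts.get(p, 0)) for p, n in paper_counts.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

paper_counts = {"noun phrase": 12, "pubmed": 5, "the results": 9}
web_counts = {"noun phrase": 100, "pubmed": 1000, "the results": 100000}
print(rank_phrases(paper_counts, web_counts))  # "noun phrase" ranks first
```

The division by web frequency plays the same role as the inverse-document-frequency term in classic tf-idf weighting: common phrases such as "the results" are demoted however often they appear in the paper.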

  5. Computer vision syndrome: A review.

    PubMed

    Gowrisankaran, Sowjanya; Sheedy, James E

    2015-01-01

    Computer vision syndrome (CVS) is a collection of symptoms related to prolonged work at a computer display. This article reviews the current knowledge about the symptoms, related factors and treatment modalities for CVS. Relevant literature on CVS published during the past 65 years was analyzed. Symptoms reported by computer users are classified into internal ocular symptoms (strain and ache), external ocular symptoms (dryness, irritation, burning), visual symptoms (blur, double vision) and musculoskeletal symptoms (neck and shoulder pain). The major factors associated with CVS are either environmental (improper lighting, display position and viewing distance) and/or dependent on the user's visual abilities (uncorrected refractive error, oculomotor disorders and tear film abnormalities). Although the factors associated with CVS have been identified the physiological mechanisms that underlie CVS are not completely understood. Additionally, advances in technology have led to the increased use of hand-held devices, which might impose somewhat different visual challenges compared to desktop displays. Further research is required to better understand the physiological mechanisms underlying CVS and symptoms associated with the use of hand-held and stereoscopic displays.

  6. Parallel Algorithms for Computer Vision

    DTIC Science & Technology

    1990-04-01

    NA86-1, Thinking Machines Corporation, Cambridge, MA, December 1986. [43] J. Little, G. Blelloch, and T. Cass. How to program the connection machine for... to program the connection machine for computer vision. In Proc. Workshop on Comp. Architecture for Pattern Analysis and Machine Intell., 1987. [92] J...In Proceedings of SPIE Conf. on Advances in Intelligent Robotics Systems, Bellingham, VA, 1987. SPIE. [91] J. Little, G. Blelloch, and T. Cass. How

  7. From Image Analysis to Computer Vision: Motives, Methods, and Milestones.

    DTIC Science & Technology

    1998-07-01

    images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial...photographs; but by the 1960’s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence...scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision

  8. Effects of job-related stress and burnout on asthenopia among high-tech workers.

    PubMed

    Ostrovsky, Anat; Ribak, Joseph; Pereg, Avihu; Gaton, Dan

    2012-01-01

    Eye- and vision-related symptoms are the most frequent health problems among computer users. The findings of eye strain, tired eyes, eye irritation, burning sensation, redness, blurred vision and double vision, when appearing together, have recently been termed 'computer vision syndrome', or asthenopia. To examine the frequency and intensity of asthenopia among individuals employed in research and development departments of high-tech firms, and the effects of job stress and burnout on ocular complaints, this study included 106 subjects: 42 high-tech workers (study group) and 64 bank employees (control group). All participants completed self-report questionnaires covering demographics, asthenopia, satisfaction with work environmental conditions, job-related stress and burnout. There was a significant between-group difference in the intensity of asthenopia, but not in its frequency. Burnout appeared to be a significant contributing factor to both the intensity and frequency of asthenopia. This study shows that burnout is a significant factor in asthenopic complaints in high-tech workers. This manuscript analyses the effects of psychological environmental factors, such as job stress and burnout, on ocular complaints in the workplace of computer users. The findings may have an ergonomic impact on how to improve the health, safety and comfort of the working environment among computer users, for better perception of the job environment, efficacy and production.

  9. Integrating computation into the undergraduate curriculum: A vision and guidelines for future developments

    NASA Astrophysics Data System (ADS)

    Chonacky, Norman; Winch, David

    2008-04-01

    There is substantial evidence of a need to make computation an integral part of the undergraduate physics curriculum. This need is consistent with data from surveys in both the academy and the workplace, and has been reinforced by two years of exploratory efforts by a group of physics faculty for whom computation is a special interest. We have examined past and current efforts at reform and a variety of strategic, organizational, and institutional issues involved in any attempt to broadly transform existing practice. We propose a set of guidelines for development based on this past work and discuss our vision of computationally integrated physics.

  10. Computer Vision Syndrome and Associated Factors Among Medical and Engineering Students in Chennai

    PubMed Central

    Logaraj, M; Madhupriya, V; Hegde, SK

    2014-01-01

    Background: Almost all institutions, colleges, universities and homes today use computers regularly. Very little research has been carried out on Indian users, especially college students, regarding the effects of computer use on eye and vision-related problems. Aim: The aim of this study was to assess the prevalence of computer vision syndrome (CVS) among medical and engineering students and the factors associated with it. Subjects and Methods: A cross-sectional study was conducted among medical and engineering college students of a university situated in the suburban area of Chennai. Students who used a computer in the month preceding the date of the study were included. The participants were surveyed using a pre-tested structured questionnaire. Results: Among engineering students, the prevalence of CVS was found to be 81.9% (176/215), while among medical students it was found to be 78.6% (158/201). A significantly higher proportion of engineering students, 40.9% (88/215), used computers for 4-6 h/day as compared to medical students, 10% (20/201) (P < 0.001). The reported symptoms of CVS were higher among engineering students compared with medical students. Students who used a computer for 4-6 h were at significantly higher risk of developing redness (OR = 1.2, 95% CI = 1.0-3.1, P = 0.04), burning sensation (OR = 2.1, 95% CI = 1.3-3.1, P < 0.01) and dry eyes (OR = 1.8, 95% CI = 1.1-2.9, P = 0.02) compared to those who used a computer for less than 4 h. A significant correlation was found between increased hours of computer use and the symptoms of redness, burning sensation, blurred vision and dry eyes. Conclusion: The present study revealed that more than three-fourths of the students complained of at least one symptom of CVS while working on the computer. PMID:24761234

  11. Computer vision syndrome and associated factors among medical and engineering students in chennai.

    PubMed

    Logaraj, M; Madhupriya, V; Hegde, Sk

    2014-03-01

    Almost all institutions, colleges, universities and homes today use computers regularly. Very little research has been carried out on Indian users, especially college students, regarding the effects of computer use on eye and vision-related problems. The aim of this study was to assess the prevalence of computer vision syndrome (CVS) among medical and engineering students and the factors associated with it. A cross-sectional study was conducted among medical and engineering college students of a university situated in the suburban area of Chennai. Students who used a computer in the month preceding the date of the study were included. The participants were surveyed using a pre-tested structured questionnaire. Among engineering students, the prevalence of CVS was found to be 81.9% (176/215), while among medical students it was found to be 78.6% (158/201). A significantly higher proportion of engineering students, 40.9% (88/215), used computers for 4-6 h/day as compared to medical students, 10% (20/201) (P < 0.001). The reported symptoms of CVS were higher among engineering students compared with medical students. Students who used a computer for 4-6 h were at significantly higher risk of developing redness (OR = 1.2, 95% CI = 1.0-3.1, P = 0.04), burning sensation (OR = 2.1, 95% CI = 1.3-3.1, P < 0.01) and dry eyes (OR = 1.8, 95% CI = 1.1-2.9, P = 0.02) compared to those who used a computer for less than 4 h. A significant correlation was found between increased hours of computer use and the symptoms of redness, burning sensation, blurred vision and dry eyes. The present study revealed that more than three-fourths of the students complained of at least one symptom of CVS while working on the computer.
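Odds ratios and confidence intervals like those reported in this record are computed from a 2x2 exposure/outcome table; the sketch below uses the standard Woolf (log) method with illustrative counts, not the study's data.

```python
# How odds ratios and 95% confidence intervals like those above are obtained
# from a 2x2 table, using the standard Woolf (log) method. The counts are
# illustrative, not the study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a=exposed cases, b=exposed non-cases, c=unexposed cases, d=unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 60, 25, 75)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")  # OR = 2.00, 95% CI = 1.09-3.66
```

A confidence interval that excludes 1.0, as in this example, corresponds to the statistically significant associations (P < 0.05) quoted in the abstract.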

  12. Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon

    1997-04-01

    A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines the Annealing Cellular Neural Network (ACNN) and the Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons, each locally connected to its neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. The feasibility of a compact current-mode VLSI design of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM-based avionics architecture for NASA's New Millennium Program is also described.

  13. Another 25 Years of AIED? Challenges and Opportunities for Intelligent Educational Technologies of the Future

    ERIC Educational Resources Information Center

    Pinkwart, Niels

    2016-01-01

    This paper attempts an analysis of some current trends and future developments in computer science, education, and educational technology. Based on these trends, two possible future predictions of AIED are presented in the form of a utopian vision and a dystopian vision. A comparison of these two visions leads to seven challenges that AIED might…

  14. Merged Vision and GPS Control of a Semi-Autonomous, Small Helicopter

    NASA Technical Reports Server (NTRS)

    Rock, Stephen M.

    1999-01-01

    This final report documents the activities performed during the research period from April 1, 1996 to September 30, 1997. It contains three papers: Carrier Phase GPS and Computer Vision for Control of an Autonomous Helicopter; A Contestant in the 1997 International Aerospace Robotics Laboratory Stanford University; and Combined CDGPS and Vision-Based Control of a Small Autonomous Helicopter.

  15. Recent advances in the development and transfer of machine vision technologies for space

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  16. Computing Critical Properties with Yang-Yang Anomalies

    NASA Astrophysics Data System (ADS)

    Orkoulas, Gerassimos; Cerdeirina, Claudio; Fisher, Michael

    2017-01-01

    Computation of the thermodynamics of fluids in the critical region is a challenging task owing to divergence of the correlation length and lack of particle-hole symmetries found in Ising or lattice-gas models. In addition, analysis of experiments and simulations reveals a Yang-Yang (YY) anomaly which entails sharing of the specific heat singularity between the pressure and the chemical potential. The size of the YY anomaly is measured by the YY ratio Rμ = Cμ/CV of the amplitudes of Cμ = -T d²μ/dT² and of the total specific heat CV. A "complete scaling" theory, in which the pressure mixes into the scaling fields, accounts for the YY anomaly. In Phys. Rev. Lett. 116, 040601 (2016), compressible cell gas (CCG) models, which exhibit YY and singular diameter anomalies, have been advanced for near-critical fluids. In such models, the individual cell volumes are allowed to fluctuate. The thermodynamics of CCGs can be computed through mapping onto the Ising model via the seldom-used great grand canonical ensemble. The computations indicate that local free volume fluctuations are the origins of the YY effects. Furthermore, local energy-volume coupling (to model water) is another crucial factor underlying the phenomena.

  17. Effect of contact lens use on Computer Vision Syndrome.

    PubMed

    Tauste, Ana; Ronda, Elena; Molina, María-José; Seguí, Mar

    2016-03-01

    To analyse the relationship between Computer Vision Syndrome (CVS) in computer workers and contact lens use, according to lens materials. Cross-sectional study. The study included 426 civil-service office workers, of whom 22% were contact lens wearers. Workers completed the Computer Vision Syndrome Questionnaire (CVS-Q) and provided information on their contact lenses and exposure to video display terminals (VDT) at work. CVS was defined as a CVS-Q score of 6 or more. The covariates were age and sex. Logistic regression was used to calculate the association (crude and adjusted for age and sex) between CVS and individual and work-related factors, and between CVS and contact lens type. Contact lens wearers are more likely to suffer CVS than non-lens wearers, with a prevalence of 65% vs 50%. Workers who wear contact lenses and are exposed to the computer for more than 6 h/day are more likely to suffer CVS than non-lens wearers working at the computer for the same amount of time (aOR = 4.85; 95% CI, 1.25-18.80; p = 0.02). Regular contact lens use increases CVS after 6 h of computer work.

  18. Computational gestalts and perception thresholds.

    PubMed

    Desolneux, Agnès; Moisan, Lionel; Morel, Jean-Michel

    2003-01-01

    In 1923, Max Wertheimer proposed a research programme and method in visual perception. He conjectured the existence of a small set of geometric grouping laws governing the perceptual synthesis of phenomenal objects, or "gestalts", from the atomic retinal input. In this paper, we review this set of geometric grouping laws, using the works of Metzger, Kanizsa and their schools. We then explain why the Gestalt theory research programme can be translated into a computer vision programme. This translation is not straightforward, since Gestalt theory never addressed two fundamental matters: image sampling and image information measurements. Using these advances, we show that gestalt grouping laws can be translated into quantitative laws allowing the automatic computation of gestalts in digital images. From the psychophysical viewpoint, a main issue is raised: the computer vision gestalt detection methods deliver predictable perception thresholds. Thus, we are in a position to build artificial images and check whether some kind of agreement can be found between the computationally predicted thresholds and the psychophysical ones. We describe and discuss two preliminary sets of experiments, in which we compared the gestalt detection performance of several subjects with the predicted detection curve. In our opinion, the results of this experimental comparison support the idea of a much more systematic interaction between computational predictions in computer vision and psychophysical experiments.

  19. Vision-related problems among the workers engaged in jewellery manufacturing.

    PubMed

    Salve, Urmi Ravindra

    2015-01-01

    The American Optometric Association defines Computer Vision Syndrome (CVS) as a "complex of eye and vision problems related to near work which are experienced during or related to computer use." This happens when the visual demand of the task exceeds the visual ability of the user. Even though the problems were initially attributed to computer-related activities, similar problems have subsequently been reported while carrying out any near-point task. Jewellery manufacturing involves precision designs and the setting of tiny metals and stones, which requires high visual attention and mental concentration and is often a near-point task. It is therefore expected that workers engaged in jewellery manufacturing may also experience CVS-like symptoms. Keeping the above in mind, this study was taken up (1) to identify the prevalence of CVS-like symptoms among workers in jewellery manufacturing and compare them with workers at computer workstations, and (2) to ascertain whether such symptoms are associated with any permanent vision-related problems. Case control study. The study was carried out in the Zaveri Bazaar region and at an IT-enabled organization in Mumbai. The study involved the identification of symptoms of CVS using a questionnaire from the Eye Strain Journal, ophthalmological check-ups and measurement of spontaneous eye blink rate. The data obtained from jewellery manufacturing were compared with the data from subjects engaged in computer work and with data available in the literature. Comparative inferential statistics were used. Results showed that the visual demands of the tasks carried out in jewellery manufacturing were much higher than those of computer-related work.

  20. Application of the SP theory of intelligence to the understanding of natural vision and the development of computer vision.

    PubMed

    Wolff, J Gerard

    2014-01-01

    The SP theory of intelligence aims to simplify and integrate concepts in computing and cognition, with information compression as a unifying theme. This article is about how the SP theory may, with advantage, be applied to the understanding of natural vision and the development of computer vision. Potential benefits include an overall simplification of concepts in a universal framework for knowledge and seamless integration of vision with other sensory modalities and other aspects of intelligence. Low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in the manner of the run-length encoding technique for information compression. The concept of multiple alignment in the SP theory may be applied to the recognition of objects, and to scene analysis, with a hierarchy of parts and sub-parts, at multiple levels of abstraction, and with family-resemblance or polythetic categories. The theory has potential for the unsupervised learning of visual objects and classes of objects, and suggests how coherent concepts may be derived from fragments. As in natural vision, both recognition and learning in the SP system are robust in the face of errors of omission, commission and substitution. The theory suggests how, via vision, we may piece together a knowledge of the three-dimensional structure of objects and of our environment, it provides an account of how we may see things that are not objectively present in an image, how we may recognise something despite variations in the size of its retinal image, and how raster graphics and vector graphics may be unified. And it has things to say about the phenomena of lightness constancy and colour constancy, the role of context in recognition, ambiguities in visual perception, and the integration of vision with other senses and other aspects of intelligence.
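The run-length encoding mentioned in this record can be shown in a minimal sketch: uniform areas compress to single (value, length) runs, and run boundaries fall exactly where intensity changes, i.e. at candidate edges.

```python
# Minimal sketch of the run-length idea: a uniform stretch of pixels
# compresses to one (value, length) run, so the boundaries between runs
# mark intensity changes, i.e. candidate edge locations on a scanline.
def run_length_encode(row):
    runs, start = [], 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((row[start], i - start))
            start = i
    return runs

row = [0, 0, 0, 255, 255, 0]
print(run_length_encode(row))  # [(0, 3), (255, 2), (0, 1)]
```

The compressed form retains all the information of the original scanline while making the redundancy of uniform areas, and hence the location of edges, explicit.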

  1. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
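A minimal sketch of the curvature computation underlying the contour analysis: the standard planar-curve formula is evaluated here with finite differences on a sampled contour rather than on an actual B-spline, so it only approximates the paper's method.

```python
# Minimal sketch of the curvature computation for a planar contour:
# kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), evaluated with finite
# differences on sampled points (here a circle of radius 2, whose true
# curvature is 0.5). A real implementation would differentiate the B-spline.
import math

def curvature(xs, ys):
    ks = []
    for i in range(1, len(xs) - 1):
        dx = (xs[i + 1] - xs[i - 1]) / 2
        dy = (ys[i + 1] - ys[i - 1]) / 2
        ddx = xs[i + 1] - 2 * xs[i] + xs[i - 1]
        ddy = ys[i + 1] - 2 * ys[i] + ys[i - 1]
        ks.append((dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5)
    return ks

ts = [2 * math.pi * i / 200 for i in range(201)]
xs = [2 * math.cos(t) for t in ts]
ys = [2 * math.sin(t) for t in ts]
ks = curvature(xs, ys)
print(round(sum(ks) / len(ks), 3))  # 0.5
```

On a human silhouette, local extrema of this signed curvature would mark candidate extremities (head, hands, feet) used to place the skeleton joints.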

  2. Understanding of and applications for robot vision guidance at KSC

    NASA Technical Reports Server (NTRS)

    Shawaga, Lawrence M.

    1988-01-01

    The primary thrust of robotics at KSC is for the servicing of Space Shuttle remote umbilical docking functions. In order for this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in Six Degrees of Freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes for the lab robot, guiding it through a closed loop visual feedback system to move with the simulated Orbiter interface. This paper will address an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications will be addressed.

  3. How to differentiate collective variables in free energy codes: Computer-algebra code generation and automatic differentiation

    NASA Astrophysics Data System (ADS)

    Giorgino, Toni

    2018-07-01

    The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
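Approach (b) can be illustrated in miniature with forward-mode dual numbers; note that Stan Math is a C++ reverse-mode library, so this pure-Python toy only conveys the idea of automatic derivative propagation, for a hypothetical distance CV rather than the paper's radius-of-curvature example.

```python
# Toy forward-mode automatic differentiation with dual numbers, to convey
# the idea behind approach (b). (Stan Math is a C++ reverse-mode library;
# this pure-Python analogue is illustrative only.) The CV is the distance
# between two atoms; seeding x2's derivative slot with 1 yields d(dist)/dx2.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        return Dual(self.val + o.val, self.dot + o.dot)
    def __sub__(self, o):
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    def sqrt(self):
        r = math.sqrt(self.val)
        return Dual(r, self.dot / (2 * r))

def distance(p1, p2):
    d = [b - a for a, b in zip(p1, p2)]
    return (d[0] * d[0] + d[1] * d[1] + d[2] * d[2]).sqrt()

p1 = [Dual(0.0), Dual(0.0), Dual(0.0)]
p2 = [Dual(3.0, 1.0), Dual(4.0), Dual(0.0)]
cv = distance(p1, p2)
print(cv.val, cv.dot)  # 5.0 0.6  (distance and d(dist)/dx2 = 3/5)
```

Because derivatives propagate through the arithmetic operators automatically, a CV author writes only the value computation, which is exactly the time saving the article targets.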

  4. Visual acuity, endothelial cell density and polymegathism after iris-fixated lens implantation.

    PubMed

    Nassiri, Nader; Ghorbanhosseini, Saeedeh; Jafarzadehpur, Ebrahim; Kavousnezhad, Sara; Nassiri, Nariman; Sheibani, Kourosh

    2018-01-01

    The purpose of this study was to evaluate visual acuity, endothelial cell density (ECD), and polymegathism after iris-fixated lens (Artiflex® AC 401) implantation for correction of moderate to high myopia. In this retrospective cross-sectional study, 55 eyes from 29 patients undergoing iris-fixated lens implantation for correction of myopia (-5.00 to -15.00 D) from 2007 to 2014 were evaluated. Uncorrected visual acuity, best spectacle-corrected visual acuity, refraction, ECD, and polymegathism (coefficient of variation [CV] in the sizes of endothelial cells) were measured preoperatively and 6 months postoperatively. At the 6-month follow-up, uncorrected visual acuity was 20/25 or better in 81.5% of the eyes. Best-corrected visual acuity was 20/30 or better in 96.3% of the eyes, and more than 92% of the eyes were within ±1 D of the target refraction. The mean corneal ECD before surgery was 2,803±339 cells/mm², which changed to 2,744±369 cells/mm² six months after surgery (p = 0.142). CV in the sizes of endothelial cells was 25.7%±7.1% before surgery and 25.9%±5.4% six months after surgery (p = 0.857). Artiflex iris-fixated lens implantation is a suitable and predictable method for correction of moderate to high myopia. There was no statistically significant change in ECD or polymegathism (CV in the sizes of endothelial cells) after 6 months of follow-up.
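
    Polymegathism is reported above as the coefficient of variation (CV) of endothelial cell sizes, i.e. CV = (standard deviation / mean) × 100%. A minimal illustration of that statistic, with made-up cell areas rather than the study's data:

```python
import statistics

cell_areas = [340.0, 360.0, 310.0, 390.0, 350.0]  # hypothetical areas in um^2
mean = statistics.fmean(cell_areas)
sd = statistics.pstdev(cell_areas)  # population SD; the abstract does not say which form was used
cv_percent = 100.0 * sd / mean      # coefficient of variation, in percent
```
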

  5. Survey of computer vision-based natural disaster warning systems

    NASA Astrophysics Data System (ADS)

    Ko, ByoungChul; Kwak, Sooyeong

    2012-07-01

    With the rapid development of information technology, natural disaster prevention is growing as a new research field dealing with surveillance systems. To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing, geographic information systems (GIS), and vision sensors has been receiving widespread interest over the last decade. This paper provides an up-to-date review of five different types of natural disasters and their corresponding warning systems using computer vision and pattern recognition techniques, such as wildfire smoke and flame detection, water level detection for flood prevention, coastal zone monitoring, and landslide detection. Finally, we conclude with some thoughts about future research directions.

  6. Visual ergonomics in the workplace.

    PubMed

    Anshel, Jeffrey R

    2007-10-01

    This article provides information about visual function and its role in workplace productivity. By understanding the connection among comfort, health, and productivity and knowing the many options for effective ergonomic workplace lighting, the occupational health nurse can be sensitive to potential visual stress that can affect all areas of performance. Computer vision syndrome-the eye and vision problems associated with near work experienced during or related to computer use-is defined and solutions to it are discussed.

  7. A Feasibility Study of View-independent Gait Identification

    DTIC Science & Technology

    2012-03-01

    ice skates. For walking, the footprint records for single pixels form clusters that are well separated in space and time. (Any overlap of contact...Pattern Recognition 2007, 1-8. Cheng M-H, Ho M-F & Huang C-L (2008), "Gait Analysis for Human Identification Through Manifold Learning and HMM... Learning and Cybernetics 2005, 4516-4521. Moeslund T B & Granum E (2001), "A Survey of Computer Vision-Based Human Motion Capture", Computer Vision

  8. Observability/Identifiability of Rigid Motion under Perspective Projection

    DTIC Science & Technology

    1994-03-08

    Faugeras and S. Maybank. Motion from point matches: multiplicity of solutions. Int. J. of Computer Vision, 1990. [16] D.B. Gennery. Tracking known...sequences. Int. J. of Computer Vision, 1989. [37] S. Maybank. Theory of reconstruction from image motion. Springer Verlag, 1992. [38] Andrea 6...defined in section 5; in this appendix we show a simple characterization which is due to Faugeras and Maybank [15, 37]. Theorem B.1. Let Q = UCV^T

  9. Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor

    NASA Astrophysics Data System (ADS)

    Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan

    2017-12-01

    Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least-squares ellipse fitting and convex hulls in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in poor conditions.
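
    The interference-point elimination above leans on a convex hull of the candidate LED points (computed with OpenCV in the paper). A dependency-free sketch of that hull step, using Andrew's monotone-chain algorithm rather than OpenCV's implementation; the points are illustrative (x, y) tuples:

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# An interior "interference" point is eliminated: it never appears on the hull.
hull = convex_hull([(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)])
```
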

  10. Computer vision in roadway transportation systems: a survey

    NASA Astrophysics Data System (ADS)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  11. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  12. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
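
    The CNN feature extraction described above chains convolution, nonlinearity, and pooling. A dependency-free toy sketch of that operation chain (valid 2D convolution, ReLU, 2x2 max pooling) on a tiny grayscale "image"; the kernel is a hypothetical vertical-edge detector, not one of the paper's learned filters:

```python
def conv2d_valid(img, kernel):
    """Valid-mode 2D convolution (no padding) on lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + u][j + v] * kernel[u][v]
                           for u in range(kh) for v in range(kw)))
        out.append(row)
    return out

def relu(fm):
    return [[max(0.0, x) for x in row] for row in fm]

def maxpool2x2(fm):
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1],
               [-1, 1]]  # responds to left-dark / right-bright transitions

features = maxpool2x2(relu(conv2d_valid(image, edge_kernel)))
```

    In a real CNN the kernels are learned from data and many such feature maps are stacked; the mechanics per map are the same.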

  13. Computer vision for general purpose visual inspection: a fuzzy logic approach

    NASA Astrophysics Data System (ADS)

    Chen, Y. H.

    Computer vision systems are widely used in automatic visual industrial inspection. Such systems are often application specific and therefore require domain knowledge for a successful implementation. Since visual inspection can be viewed as a decision-making process, it is argued that integrating fuzzy logic analysis with computer vision systems provides a practical approach to general-purpose visual inspection applications. This paper describes the development of an integrated fuzzy-rule-based automatic visual inspection system. Domain knowledge about a particular application is represented as a set of fuzzy rules. From the status of predefined fuzzy variables, the fuzzy rules are defuzzified to give the inspection results. A practical application, the inspection of IC marks (often English characters and a company logo), is demonstrated; it gives more consistent results than a conventional thresholding method.
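
    The fuzzify/apply-rules/defuzzify loop described above can be sketched minimally. The membership functions and the two rules below are toy assumptions, not the paper's IC-mark rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def inspect(contrast):
    """Map a measured print-contrast value to a [0, 1] acceptance score."""
    low = tri(contrast, -0.5, 0.0, 0.5)   # degree to which "contrast is low"
    high = tri(contrast, 0.5, 1.0, 1.5)   # degree to which "contrast is high"
    # Rule 1: IF contrast is high THEN quality = 1.0 (accept)
    # Rule 2: IF contrast is low  THEN quality = 0.0 (reject)
    strengths = [(high, 1.0), (low, 0.0)]
    num = sum(w * q for w, q in strengths)
    den = sum(w for w, _ in strengths)
    return num / den if den else 0.5      # 0.5 = no rule fired, undecided

score_good = inspect(0.9)  # strongly matches "high contrast" rule
score_bad = inspect(0.1)   # strongly matches "low contrast" rule
```

    The weighted-average defuzzification here is one common choice; a production system would carry many more fuzzy variables and rules.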

  14. Comparing visual representations across human fMRI and computational vision

    PubMed Central

    Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.

    2013-01-01

    Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
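
    Representational dissimilarity analysis, as used above, compares the pairwise-dissimilarity structure of two systems rather than their raw responses. A minimal sketch with hypothetical data: build an RDM (1 minus Pearson correlation between condition patterns) for "neural" and "model" responses, then correlate the two RDMs' off-diagonal entries. (Kriegeskorte et al. compare RDMs with rank correlation; plain Pearson is used here for brevity.)

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def rdm(patterns):
    """Dissimilarity matrix: 1 - correlation between each pair of patterns."""
    n = len(patterns)
    return [[1.0 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def rdm_similarity(rdm_a, rdm_b):
    """Correlate the lower-triangular (off-diagonal) entries of two RDMs."""
    tri_a = [rdm_a[i][j] for i in range(len(rdm_a)) for j in range(i)]
    tri_b = [rdm_b[i][j] for i in range(len(rdm_b)) for j in range(i)]
    return pearson(tri_a, tri_b)

# Hypothetical responses to 4 stimuli (rows) across 3 measurement channels;
# stimuli 1-2 and 3-4 form similar pairs in both systems.
neural = [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2], [0.1, 1.0, 0.8], [0.2, 0.9, 1.0]]
model = [[2.0, 0.5, 0.3], [1.8, 0.4, 0.2], [0.3, 2.1, 1.7], [0.2, 1.9, 2.0]]
fit = rdm_similarity(rdm(neural), rdm(model))  # high when structures agree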

  15. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

    An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, and mean and median tests, and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
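
    The first stage above, simple frame differencing, can be sketched in a few lines: threshold the absolute per-pixel difference of two grayscale frames into a binary motion mask, then require a minimum number of changed pixels as crude noise rejection. Frames here are toy lists of lists with 0-255 values:

```python
def frame_difference(prev, curr, threshold=25):
    """Binary motion mask: 1 where the pixel changed by more than threshold."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]

def motion_detected(mask, min_pixels=2):
    """Declare motion only when enough pixels changed (crude noise rejection)."""
    return sum(sum(row) for row in mask) >= min_pixels

background = [[10, 10, 10, 10]] * 3
with_person = [[10, 200, 200, 10],
               [10, 200, 200, 10],
               [10, 10, 10, 10]]
mask = frame_difference(background, with_person)
```

    As the abstract notes, this stage alone is easily fooled by non-uniform backgrounds, which is why the system layers tracking and an expert system on top of it.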

  16. Normative values for a tablet computer-based application to assess chromatic contrast sensitivity.

    PubMed

    Bodduluri, Lakshmi; Boon, Mei Ying; Ryan, Malcolm; Dain, Stephen J

    2018-04-01

    Tablet computer displays are amenable to the development of vision tests in portable form. Assessing color vision using an easily accessible and portable test may help in the self-monitoring of vision-related changes in ocular/systemic conditions and assist in the early detection of disease processes. Tablet computer-based games were developed with different levels of gamification as a more portable option to assess chromatic contrast sensitivity. Game 1 was designed as a clinical version with no gaming elements. Game 2 was a gamified version of game 1 (with added fun elements: feedback, scores, and sounds), and game 3 was a complete game with the vision task nested within it. The current study aimed to determine normative values and evaluate the repeatability of the tablet computer-based games in comparison with an established test, the Cambridge Colour Test (CCT) Trivector test. Normally sighted individuals [N = 100, median (range) age 19.0 years (18-56 years)] had their chromatic contrast sensitivity evaluated binocularly using the three games and the CCT. Games 1 and 2 and the CCT showed similar absolute thresholds and tolerance intervals, and game 3 had significantly lower values than games 1 and 2 and the CCT, due to visual task differences. With the exception of game 3 for blue-yellow, the CCT and tablet computer-based games showed similar repeatability with comparable 95% limits of agreement. The custom-designed games are portable and rapid, and may find application in routine clinical practice, especially for testing younger populations.

  17. Machine Vision For Industrial Control:The Unsung Opportunity

    NASA Astrophysics Data System (ADS)

    Falkman, Gerald A.; Murray, Lawrence A.; Cooper, James E.

    1984-05-01

    Vision modules have primarily been developed to relieve those pressures newly brought into existence by Inspection (QUALITY) and Robotic (PRODUCTIVITY) mandates. Industrial Control pressure stems, on the other hand, from the older first-industrial-revolution mandate of throughput. Satisfying such pressure calls for speed in both imaging and decision making. Vision companies have, however, put speed on a back burner or ignored it entirely, because most modules are computer/software based, which limits their speed potential. Increasingly, the keynote being struck at machine vision seminars is that "Visual and Computational Speed Must Be Increased and Dramatically!" There are modular hardwired-logic systems that are fast but, all too often, they are not very bright. Such units: measure the fill factor of bottles as they spin by, read labels on cans, count stacked plastic cups or monitor the width of parts streaming past the camera. Many are only a bit more complex than a photodetector. Once in place, most of these units are incapable of simple upgrading to a new task and are Vision's analog to the robot industry's pick-and-place (RIA TYPE E) robot. Vision thus finds itself amidst the same quandaries that once beset the Robot Industry of America when it tried to define a robot, excluded dumb ones, and was left with only slow machines whose unit volume potential is shatteringly low. This paper develops an approach to meeting the need for a vision system that cuts a swath into the terra incognita of intelligent, high-speed vision processing. Main attention is directed to vision for industrial control. Some presently untapped vision application areas that will be serviced include: Electronics, Food, Sports, Pharmaceuticals, Machine Tools and Arc Welding.

  18. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and VG resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data is available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.

  19. Aircraft cockpit vision: Math model

    NASA Technical Reports Server (NTRS)

    Bashir, J.; Singh, R. P.

    1975-01-01

    A mathematical model was developed to describe the field of vision of a pilot seated in an aircraft. Given the position and orientation of the aircraft, along with the geometrical configuration of its windows, and the location of an object, the model determines whether the object would be within the pilot's external vision envelope provided by the aircraft's windows. The computer program using this model was implemented and is described.
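
    The visibility test described above can be sketched geometrically: transform the object into the cockpit frame, then check whether the line of sight from the pilot's eye point passes through a window. The rectangular window, coordinate frame, and dimensions below are illustrative assumptions, not the report's actual model:

```python
def visible_through_window(eye, obj, window_x, half_w, half_h):
    """Line-of-sight test against a rectangular window in the plane x = window_x.

    eye, obj: (x, y, z) points in the cockpit frame; the window is centered
    on the eye's y-z axes. Returns True if the sight line crosses the window.
    """
    dx = obj[0] - eye[0]
    if dx <= 0:
        return False  # object is not ahead of the pilot
    t = (window_x - eye[0]) / dx
    if not 0.0 < t < 1.0:
        return False  # window plane is not between eye and object
    # Intersection of the sight line with the window plane
    y = eye[1] + t * (obj[1] - eye[1])
    z = eye[2] + t * (obj[2] - eye[2])
    return abs(y) <= half_w and abs(z) <= half_h

eye = (0.0, 0.0, 0.0)
ahead = visible_through_window(eye, (100.0, 5.0, 2.0), 1.0, 0.4, 0.3)
behind = visible_through_window(eye, (-50.0, 0.0, 0.0), 1.0, 0.4, 0.3)
```

    The full model repeats such a test for every window polygon after rotating object coordinates by the aircraft's attitude.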

  20. Final Report for Geometric Observers and Particle Filtering for Controlled Active Vision

    DTIC Science & Technology

    2016-12-15

    15-12-2016, Final Report, 01 Sep 06 - 09 May 11. Final Report for Geometric Observers & Particle Filtering for Controlled Active Vision, 49414-NS.1, Allen...Observers and Particle Filtering for Controlled Active Vision, by Allen R. Tannenbaum, School of Electrical and Computer Engineering, Georgia Institute of...2.2.4 Conformal Area Minimizing Flows; 2.3 Particle Filters

  1. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  2. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  3. The neuroscience of vision-based grasping: a functional review for computational modeling and bio-inspired robotics.

    PubMed

    Chinellato, Eris; Del Pobil, Angel P

    2009-06-01

    The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.

  4. Military Vision Research Program

    DTIC Science & Technology

    2011-07-01

    accomplishments emanating from this research: • 3 novel computer-based tasks have been developed that measure visual distortions • These tests are based...10-1-0392 TITLE: Military Vision Research Program PRINCIPAL INVESTIGATOR: Dr. Darlene Dartt...CONTRACTING ORGANIZATION: The Schepens Eye Research

  5. Smart vision chips: An overview

    NASA Technical Reports Server (NTRS)

    Koch, Christof

    1994-01-01

    This viewgraph presentation presents four working analog VLSI vision chips: (1) time-derivative retina, (2) zero-crossing chip, (3) resistive fuse, and (4) figure-ground chip; work in progress on computing motion and neuromorphic systems; and conceptual and practical lessons learned.

  6. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    PubMed

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. First, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Second, we design a simple yet effective method to transfer features learned from CDBNs on generic source tasks to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  7. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    PubMed Central

    Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. First, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Second, we design a simple yet effective method to transfer features learned from CDBNs on generic source tasks to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method. PMID:27847827

  8. Design of a sampling plan to detect ochratoxin A in green coffee.

    PubMed

    Vargas, E A; Whitaker, T B; Dos Santos, E A; Slate, A B; Lima, F B; Franca, R C A

    2006-01-01

    The establishment of maximum limits for ochratoxin A (OTA) in coffee by importing countries requires that coffee-producing countries develop scientifically based sampling plans to assess OTA content in lots of green coffee before the coffee enters the market, thus reducing consumer exposure to OTA, minimizing the number of lots rejected, and reducing financial loss for producing countries. A study was carried out to design an official sampling plan to determine OTA in green coffee produced in Brazil. Twenty-five lots of green coffee (type 7 - approximately 160 defects) were sampled according to an experimental protocol in which 16 test samples were taken from each lot (a total of 16 kg), resulting in a total of 800 OTA analyses. The total, sampling, sample preparation, and analytical variances were 10.75 (CV = 65.6%), 7.80 (CV = 55.8%), 2.84 (CV = 33.7%), and 0.11 (CV = 6.6%), respectively, assuming a regulatory limit of 5 microg kg(-1) OTA and using a 1 kg sample, a Romer RAS mill, 25 g subsamples, and high-performance liquid chromatography. The observed OTA distribution among the 16 sample results per lot was compared to several theoretical distributions. The two-parameter lognormal distribution was selected to model OTA test results for green coffee, as it gave the best fit across all 25 lot distributions. Specific computer software was developed using the variance and distribution information to predict the probability of accepting or rejecting coffee lots at specific OTA concentrations. The acceptance probability was used to compute an operating characteristic (OC) curve specific to a sampling plan design. The OC curve was used to predict the rejection of good lots (sellers' or exporters' risk) and the acceptance of bad lots (buyers' or importers' risk).
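
    The OC-curve construction above can be sketched with the standard library: under a lognormal model of test results, the probability of accepting a lot is P(test result <= limit). The log-scale sigma below is an illustrative placeholder, not the study's fitted parameter:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def accept_probability(lot_concentration, limit=5.0, log_sigma=0.6):
    """P(accept) for a lognormal test result whose median equals the true
    lot concentration; limit in ug/kg, log_sigma is an assumed value."""
    mu = math.log(lot_concentration)
    return normal_cdf((math.log(limit) - mu) / log_sigma)

# OC curve: acceptance probability vs. true lot concentration (ug/kg).
# Good lots (low OTA) should be accepted, bad lots (high OTA) rejected.
oc_curve = {c: accept_probability(c) for c in (1.0, 2.5, 5.0, 10.0, 20.0)}
```

    The buyer's risk is the residual acceptance probability at high concentrations; the seller's risk is 1 minus the acceptance probability at low ones.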

  9. A Coherent vorticity preserving eddy-viscosity correction for Large-Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Chapelier, J.-B.; Wasistho, B.; Scalo, C.

    2018-04-01

    This paper introduces a new approach to Large-Eddy Simulation (LES) in which subgrid-scale (SGS) dissipation is applied in proportion to the degree of local spectral broadening, and hence mitigated or deactivated in regions dominated by large-scale and/or laminar vortical motion. The proposed coherent-vorticity preserving (CvP) LES methodology is based on evaluating the ratio of test-filtered to resolved (or grid-filtered) enstrophy, σ. Values of σ close to 1 indicate low sub-test-filter turbulent activity, justifying local deactivation of the SGS dissipation. The intensity of the SGS dissipation is progressively increased for σ < 1, which corresponds to small-scale spectral broadening. The SGS dissipation is then fully activated in developed turbulence, characterized by σ ≤ σ_eq, where the value σ_eq is derived assuming a Kolmogorov spectrum. The proposed approach can be applied to any eddy-viscosity model, is algorithmically simple, and is computationally inexpensive. LES of Taylor-Green vortex breakdown demonstrates that the CvP methodology improves the performance of traditional, non-dynamic dissipative SGS models, capturing the peak of total turbulent kinetic energy dissipation during transition. Similar accuracy is obtained by adopting Germano's dynamic procedure, albeit at more than twice the computational overhead. A CvP-LES of a pair of unstable periodic helical vortices is shown to predict the experimentally observed growth rate accurately using coarse resolutions. The ability of the CvP methodology to dynamically sort the coherent, large-scale motion from the smaller, broadband scales during transition is demonstrated via flow visualizations. LES of a compressible channel flow is carried out and shows a good match with a reference DNS.
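
    The CvP sensor above is the ratio of test-filtered to grid enstrophy. A 1D toy sketch of that diagnostic, using a 3-point box test filter on a periodic vorticity field; the filter choice and fields are illustrative, and the paper's actual σ_eq and scaling law are not reproduced:

```python
import math

def box_filter(w):
    """Simple 3-point box test filter with periodic wrap-around."""
    n = len(w)
    return [(w[(i - 1) % n] + w[i] + w[(i + 1) % n]) / 3.0 for i in range(n)]

def sigma(w):
    """Ratio of test-filtered to resolved enstrophy, the CvP-style sensor."""
    filt = box_filter(w)
    return sum(x * x for x in filt) / sum(x * x for x in w)

n = 64
# Large-scale (coherent) vorticity: one long sine wave
smooth = [math.sin(2 * math.pi * i / n) for i in range(n)]
# Same wave plus a grid-scale oscillation: spectral broadening
noisy = [math.sin(2 * math.pi * i / n) + 0.5 * (-1) ** i for i in range(n)]

sigma_smooth = sigma(smooth)  # near 1: filter removes little, damp the SGS model
sigma_noisy = sigma(noisy)    # well below 1: broadening, activate SGS dissipation
```

    The sensor is cheap because it reuses the resolved field: one extra filtering pass and two quadratic sums per cell.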

  10. The use of open and machine vision technologies for development of gesture recognition intelligent systems

    NASA Astrophysics Data System (ADS)

    Cherkasov, Kirill V.; Gavrilova, Irina V.; Chernova, Elena V.; Dokolin, Andrey S.

    2018-05-01

    The article is devoted to selected aspects of the development of an intelligent gesture-recognition system. A distinctive feature of the system is its intelligence block, which is based entirely on open technologies: the OpenCV library and the Microsoft Cognitive Toolkit (CNTK) platform. The article presents the rationale for the choice of this set of tools, as well as the functional scheme of the system and the hierarchy of its modules. Experiments have shown that the system correctly recognizes about 85% of images received from sensors. The authors expect that improvement of the system's algorithmic block will increase the accuracy of gesture recognition to up to 95%.

  11. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  12. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

Combustion instabilities in dilute internal combustion engines are manifest as cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach that allows rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similarly to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. 
Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space is discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
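As an illustration of the metamodel idea described in this record (interpolating engine outputs between sampled parameter points rather than re-running CFD), a one-dimensional piecewise-linear sketch is shown below. This is an assumption-laden stand-in: the actual surrogate is a multi-dimensional sparse-grid interpolant, and the function names here are hypothetical.

```python
import bisect

def build_metamodel(samples):
    """samples: list of (parameter_value, engine_output) pairs, each from
    one single-cycle simulation, over a single input dimension for
    illustration. Returns a callable surrogate over the sampled range."""
    pts = sorted(samples)
    xs = [p for p, _ in pts]
    ys = [q for _, q in pts]

    def predict(x):
        # Piecewise-linear interpolation between the two bracketing samples;
        # clamp to the nearest sample outside the sampled range.
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_left(xs, x)
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + t * (ys[i] - ys[i - 1])

    return predict
```

Evaluating the surrogate is then orders of magnitude cheaper than a CFD cycle, which is what makes long simulated cycle series feasible.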

  13. Specification and Analysis of Parallel Machine Architecture

    DTIC Science & Technology

    1990-03-17

Parallel Machine Architecture C.V. Ramamoorthy Computer Science Division Dept. of Electrical Engineering and Computer Science University of California...capacity. (4) Adaptive: The overhead in resolution of deadlocks, etc. should be in proportion to their frequency. (5) Avoid rollbacks: Rollbacks can be...snapshots of system state graphically at a rate proportional to simulation time. Some of the examples are as follows: (1) When the simulation clock of

  14. Four Frames Suffice. A Provisionary Model of Vision and Space,

    DTIC Science & Technology

    1982-09-01

1. Introduction This paper is an attempt to specify a computationally and scientifically plausible model of how...abstract neural computing unit and a variety of constructions built of these units and their properties. All of this is part of the connectionist...chosen are intended to elucidate the major scientific problems in intermediate level vision and would not be the best choice for a practical computer

  15. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
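The recursive equations referenced in this abstract follow the standard form ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1), after which any rectangular sum costs four lookups. A minimal serial sketch (illustrative only; not the paper's row-parallel hardware algorithms):

```python
def integral_image(img):
    """Compute the integral image ii, where ii[y][x] is the sum of all
    pixels img[j][i] with j <= y and i <= x, via the serial recurrence
    ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1)."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ii[y][x] = (img[y][x]
                        + (ii[y][x - 1] if x > 0 else 0)
                        + (ii[y - 1][x] if y > 0 else 0)
                        - (ii[y - 1][x - 1] if x > 0 and y > 0 else 0))
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0, y0)-(x1, y1) in O(1),
    independent of rectangle size, using at most four lookups."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

The serial data dependence visible in the loop (each cell needs its left, upper, and upper-left neighbors) is exactly what the paper's decomposition works around to obtain row-parallel computation.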

  16. Identifying local structural states in atomic imaging by computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laanait, Nouamane; Ziatdinov, Maxim; He, Qian

The availability of atomically resolved imaging modalities enables an unprecedented view into the local structural states of materials, which manifest themselves by deviations from the fundamental assumptions of periodicity and symmetry. Consequently, approaches that aim to extract these local structural states from atomic imaging data with minimal assumptions regarding the average crystallographic configuration of a material are indispensable to advances in structural and chemical investigations of materials. Here, we present an approach to identify and classify local structural states that is rooted in computer vision. This approach introduces a definition of a structural state that is composed of both local and non-local information extracted from atomically resolved images, and is wholly untethered from the familiar concepts of symmetry and periodicity. Instead, this approach relies on computer vision techniques such as feature detection, and concepts such as scale-invariance. We present the fundamental aspects of local structural state extraction and classification by application to simulated scanning transmission electron microscopy images, and analyze the robustness of this approach in the presence of common instrumental factors such as noise, limited spatial resolution, and weak contrast. Finally, we apply this computer vision-based approach for the unsupervised detection and classification of local structural states in an experimental electron micrograph of a complex oxides interface, and a scanning tunneling micrograph of a defect engineered multilayer graphene surface.

  17. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  18. Identifying local structural states in atomic imaging by computer vision

    DOE PAGES

    Laanait, Nouamane; Ziatdinov, Maxim; He, Qian; ...

    2016-11-02

The availability of atomically resolved imaging modalities enables an unprecedented view into the local structural states of materials, which manifest themselves by deviations from the fundamental assumptions of periodicity and symmetry. Consequently, approaches that aim to extract these local structural states from atomic imaging data with minimal assumptions regarding the average crystallographic configuration of a material are indispensable to advances in structural and chemical investigations of materials. Here, we present an approach to identify and classify local structural states that is rooted in computer vision. This approach introduces a definition of a structural state that is composed of both local and non-local information extracted from atomically resolved images, and is wholly untethered from the familiar concepts of symmetry and periodicity. Instead, this approach relies on computer vision techniques such as feature detection, and concepts such as scale-invariance. We present the fundamental aspects of local structural state extraction and classification by application to simulated scanning transmission electron microscopy images, and analyze the robustness of this approach in the presence of common instrumental factors such as noise, limited spatial resolution, and weak contrast. Finally, we apply this computer vision-based approach for the unsupervised detection and classification of local structural states in an experimental electron micrograph of a complex oxides interface, and a scanning tunneling micrograph of a defect engineered multilayer graphene surface.

  19. Robotic space simulation integration of vision algorithms into an orbital operations simulation

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.

    1987-01-01

    In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.

  20. Comparison of tests of accommodation for computer users.

    PubMed

    Kolker, David; Hutchinson, Robert; Nilsen, Erik

    2002-04-01

With the increased use of computers in the workplace and at home, optometrists are finding more patients presenting with symptoms of Computer Vision Syndrome. Among these symptomatic individuals, research supports that accommodative disorders are the most common vision finding. A prepresbyopic group (N = 30) and a presbyopic group (N = 30) were selected from a private practice. Assignment to a group was determined by age, accommodative amplitude, and near visual acuity with their distance prescription. Each subject was given a thorough vision and ocular health examination, then administered several nearpoint tests of accommodation at a computer working distance. All the tests produced similar results in the presbyopic group. For the prepresbyopic group, the tests yielded very different results. To effectively treat symptomatic VDT users, optometrists must assess the accommodative system along with the binocular and refractive status. For presbyopic patients, all nearpoint tests studied will yield virtually the same result. However, the method of testing accommodation, as well as the test stimulus presented, will yield significantly different responses for prepresbyopic patients. Previous research indicates that a majority of patients prefer the higher plus prescription yielded by the Gaussian image test.

  1. Vision 20/20: Automation and advanced computing in clinical radiation oncology.

    PubMed

    Moore, Kevin L; Kagadis, George C; McNutt, Todd R; Moiseenko, Vitali; Mutic, Sasa

    2014-01-01

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  2. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

    To study the feasibility of developing an architecture for real time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible by all processors. Each idle processor subsequently fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
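The task-queue pattern this record describes (idle processors fetching tasks from a shared queue server and posting results to shared memory, with load balancing emerging without a centralized controller) can be sketched with Python threads. This is an illustrative analogue, not the i860 array-processor implementation; all names are hypothetical.

```python
import queue
import threading

def run_task_server(tasks, worker_fn, num_workers=4):
    """Minimal task-queue server sketch: each idle worker repeatedly
    fetches a (task_id, data) task, processes it with worker_fn, and
    posts the result to a shared store. Load balancing is implicit:
    faster workers simply pull more tasks."""
    task_queue = queue.Queue()
    results = {}
    lock = threading.Lock()  # guards the shared result store

    for t in tasks:
        task_queue.put(t)

    def worker():
        while True:
            try:
                task_id, data = task_queue.get_nowait()
            except queue.Empty:
                return  # no tasks left; this worker goes idle
            out = worker_fn(data)
            with lock:
                results[task_id] = out
            task_queue.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

A vision function would enqueue one task per image tile (or per recursive subtask), then consume the shared results once all workers drain the queue.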

  3. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip

    2015-07-01

Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need for correcting for a scene deviation from the basic inverse distance-squared law governing the detection rates even when evaluating system calibration algorithms. In particular, the computer vision system enables a map of distance-dependence of the sources being tracked, to which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
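The inverse distance-squared law mentioned above is the geometric baseline such a fusion system calibrates against: the count rate from a point source falls off as 1/d², so a vision-supplied distance estimate lets detections at different ranges be compared on a common footing. A minimal sketch of that law (function names are hypothetical; this is not the authors' calibration algorithm):

```python
def expected_count_rate(rate_at_1m, distance_m):
    """Inverse distance-squared law: a point source producing
    rate_at_1m counts/s at 1 m is expected to produce
    rate_at_1m / d**2 counts/s at distance d (ignoring attenuation)."""
    return rate_at_1m / (distance_m ** 2)

def distance_corrected_rate(measured_rate, distance_m):
    """Normalize a measured rate back to its 1 m equivalent using a
    per-frame distance estimate from the vision tracker, so that
    detections at different ranges become directly comparable."""
    return measured_rate * (distance_m ** 2)
```

Deviations of real scenes from this baseline (scatter, shielding, extended sources) are precisely what the statistical calibration in the record has to absorb.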

  4. System of error detection in the manufacture of garments using artificial vision

    NASA Astrophysics Data System (ADS)

    Moreno, J. J.; Aguila, A.; Partida, E.; Martinez, C. L.; Morales, O.; Tejeida, R.

    2017-12-01

A computer vision system is implemented to detect errors in the cutting stage within the manufacturing process of garments in the textile industry. It provides a solution to errors within the process that cannot easily be detected by employees, in addition to significantly increasing the speed of quality review. In the textile industry, as in many others, quality control is required for manufactured products, and over the years this has been carried out manually by means of visual inspection by employees. For this reason, the objective of this project is to design a quality control system using computer vision to identify errors in the cutting stage within the garment manufacturing process, increasing the productivity of textile processes by reducing costs.

  5. Variation in nutrients formulated and nutrients supplied on 5 California dairies.

    PubMed

    Rossow, H A; Aly, S S

    2013-01-01

Computer models used in ration formulation assume that nutrients supplied by a ration formulation are the same as the nutrients presented in front of the cow in the final ration. Deviations in nutrients due to feed management effects such as dry matter changes (i.e., rain), loading, mixing, and delivery errors are assumed to not affect delivery of nutrients to the cow and her resulting milk production. To estimate how feed management affects nutrients supplied to the cow and milk production, and to determine if nutrients can serve as indices of feed management practices, weekly total mixed ration samples were collected and analyzed for 4 pens (close-up cows, fresh cows, high-milk-producing, and low-milk-producing cows, if available) for 7 to 12 wk on 5 commercial California dairies. Differences among nutrient analyses from these samples and nutrients from the formulated rations were analyzed by PROC MIXED of SAS (SAS Institute Inc., Cary, NC). Milk fat and milk protein percentages did not vary as much [coefficient of variation (CV) = 18 to 33%] as milk yield (kg; CV = 16 to 47%) across all dairies and pens. Variability in nutrients delivered was highest for macronutrient fat (CV = 22%), lignin (CV = 15%), and ash (CV = 11%) percentages and micronutrients Fe (mg/kg; CV = 48%), Na (%; CV = 42%), and Zn (mg/kg; CV = 38%) for the milking pens across all dairies. Partitioning of the variability in random effects of nutrients delivered and intraclass correlation coefficients showed that variability in lignin percentage of TMR had the highest correlation with variability in milk yield and milk fat percentage, followed by fat and crude protein percentages. However, variability in ash, fat, and lignin percentages of total mixed ration had the highest correlation with variability in milk protein percentage. 
Therefore, lignin, fat, and ash may be the best indices of feed management to include effects of variability in nutrients on variability in milk yield, milk fat, and milk protein percentages in ration formulation models. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
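The coefficient of variation used as the variability measure in this and several other records is simply the standard deviation normalized by the mean and expressed as a percentage; a one-function sketch:

```python
import statistics

def coefficient_of_variation(values):
    """CV as a percentage: sample standard deviation divided by the
    mean, times 100. Assumes a strictly positive mean, as with
    nutrient percentages or milk yields."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)
```

Because CV is dimensionless, it lets quantities on very different scales (fat percentage, Fe in mg/kg, milk yield in kg) be ranked by relative variability, as the record does.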

  6. A Logical Basis In The Layered Computer Vision Systems Model

    NASA Astrophysics Data System (ADS)

    Tejwani, Y. J.

    1986-03-01

In this paper a four-layer computer vision system model is described. The model uses a finite memory scratch pad. In this model, planar objects are defined as predicates. Predicates are relations on a k-tuple. The k-tuple consists of primitive points and relationships between primitive points. The relationships between points can be of the direct type or the indirect type. Entities are goals which are satisfied by a set of clauses. The grammar used to construct these clauses is examined.

  7. Bio-Inspired Sensing and Imaging of Polarization Information in Nature

    DTIC Science & Technology

    2008-05-04

polarization imaging,” Appl. Opt. 36, 150–155 (1997). 5. L. B. Wolff, “Polarization camera for computer vision with a beam splitter,” J. Opt. Soc. Am. A...vision with a beam splitter,” J. Opt. Soc. Am. A 11, 2935–2945 (1994). 2. L. B. Wolff and A. G. Andreou, “Polarization camera sensors,” Image Vis. Comput...group we have been developing various man-made, non-invasive imaging methodologies, sensing schemes, camera systems, and visualization and display

  8. Feasibility Study and Cost Benefit Analysis of Thin-Client Computer System Implementation Onboard United States Navy Ships

    DTIC Science & Technology

    2007-06-01

    management issues he encountered ruled out the Expanion as a viable option for thin-client computing in the Navy. An improvement in thin-client...44 Requirements to capabilities (2004). Retrieved April 29, 2007, from Vision Presence Power: A Program Guide to the U.S. Navy – 2004...Retrieved April 29, 2007, from Vision Presence Power: A Program Guide to the U.S. Navy – 2004 Edition, p. 128. Web site: http://www.chinfo.navy.mil

  9. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  10. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Merriam, E. W.; Becker, J. D.

    1973-01-01

    A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.

  11. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    NASA Astrophysics Data System (ADS)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

The article describes an algorithm for mobile robot indoor navigation based on visual odometry. The results of an experiment identifying errors in the calculated distance traveled due to wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer Raspberry Pi 3. The results of the experiment on mobile robot navigation with the use of this control system are presented.

  12. Feedback Signal from Motoneurons Influences a Rhythmic Pattern Generator.

    PubMed

    Rotstein, Horacio G; Schneider, Elisa; Szczupak, Lidia

    2017-09-20

Motoneurons are not mere output units of neuronal circuits that control motor behavior but participate in pattern generation. Research on the circuit that controls the crawling motor behavior in leeches indicated that motoneurons participate as modulators of this rhythmic motor pattern. Crawling results from successive bouts of elongation and contraction of the whole leech body. In the isolated segmental ganglia, dopamine can induce a rhythmic antiphasic activity of the motoneurons that control contraction (DE-3 motoneurons) and elongation (CV motoneurons). The study was performed in isolated ganglia, where the activity of specific motoneurons was manipulated in the course of fictive crawling (crawling). In this study, the membrane potential of CV was manipulated while crawling was monitored through the rhythmic activity of DE-3. Matching behavioral observations that show that elongation dominates the rhythmic pattern, the electrophysiological activity of CV motoneurons dominates the cycle. Brief excitation of CV motoneurons during crawling episodes resets the rhythmic activity of DE-3, indicating that CV feeds back to the rhythmic pattern generator. CV hyperpolarization accelerated the rhythm to an extent that depended on the magnitude of the cycle period, suggesting that CV exerted a positive feedback on the unit(s) of the pattern generator that controls the elongation phase. A simple computational model was implemented to test the consequences of such feedback. The simulations indicate that the duty cycle of CV depended on the strength of the positive feedback between CV and the pattern generator circuit. SIGNIFICANCE STATEMENT Rhythmic movements of animals are controlled by neuronal networks that have been conceived as hierarchical structures. 
At the base of this hierarchy are the motoneurons; a few neurons at the top control global aspects of the behavior (e.g., onset, duration); and between these two ends, specific neuronal circuits control the actual rhythmic pattern of movements. We have investigated whether motoneurons are limited to functioning as output units. Analysis of the network that controls crawling behavior in the leech has clearly indicated that motoneurons, in addition to controlling muscle activity, send signals to the pattern generator. Physiological and modeling studies on the role of specific motoneurons suggest that these feedback signals modulate the phase relationship of the rhythmic activity. Copyright © 2017 the authors 0270-6474/17/379149-11$15.00/0.

  13. The Interdependence of Computers, Robots, and People.

    ERIC Educational Resources Information Center

    Ludden, Laverne; And Others

    Computers and robots are becoming increasingly more advanced, with smaller and cheaper computers now doing jobs once reserved for huge multimillion dollar computers and with robots performing feats such as painting cars and using television cameras to simulate vision as they perform factory tasks. Technicians expect computers to become even more…

  14. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  15. Reading Digital with Low Vision

    PubMed Central

    Legge, Gordon E.

    2017-01-01

    Reading difficulty is a major consequence of vision loss for more than four million Americans with low vision. Difficulty in accessing print imposes obstacles to education, employment, social interaction and recreation. In recent years, research in vision science has made major strides in understanding the impact of low vision on reading, and the dependence of reading performance on text properties. The ongoing transition to the production and distribution of digital documents brings about new opportunities for people with visual impairment. Digital documents on computers and mobile devices permit customization of print size, spacing, font style, contrast polarity and page layout to optimize reading displays for people with low vision. As a result, we now have unprecedented opportunities to adapt text format to meet the needs of visually impaired readers. PMID:29242668

  16. Inter-operator Reliability of Magnetic Resonance Image-Based Computational Fluid Dynamics Prediction of Cerebrospinal Fluid Motion in the Cervical Spine.

    PubMed

    Martin, Bryn A; Yiallourou, Theresia I; Pahlavian, Soroush Heidari; Thyagaraj, Suraj; Bunck, Alexander C; Loth, Francis; Sheffer, Daniel B; Kröger, Jan Robert; Stergiopulos, Nikolaos

    2016-05-01

    For the first time, inter-operator dependence of MRI based computational fluid dynamics (CFD) modeling of cerebrospinal fluid (CSF) in the cervical spinal subarachnoid space (SSS) is evaluated. In vivo MRI flow measurements and anatomy MRI images were obtained at the cervico-medullary junction of a healthy subject and a Chiari I malformation patient. 3D anatomies of the SSS were reconstructed by manual segmentation by four independent operators for both cases. CFD results were compared at nine axial locations along the SSS in terms of hydrodynamic and geometric parameters. Intraclass correlation (ICC) assessed the inter-operator agreement for each parameter over the axial locations and coefficient of variance (CV) compared the percentage of variance for each parameter between the operators. Greater operator dependence was found for the patient (0.19 < ICC < 0.99) near the craniovertebral junction compared to the healthy subject (ICC > 0.78). For the healthy subject, hydraulic diameter and Womersley number had the least variance (CV = ~2%). For the patient, peak diastolic velocity and Reynolds number had the smallest variance (CV = ~3%). These results show a high degree of inter-operator reliability for MRI-based CFD simulations of CSF flow in the cervical spine for healthy subjects and a lower degree of reliability for patients with Type I Chiari malformation.

  17. Inter-Operator Dependence of Magnetic Resonance Image-Based Computational Fluid Dynamics Prediction of Cerebrospinal Fluid Motion in the Cervical Spine

    PubMed Central

    Martin, Bryn A.; Yiallourou, Theresia I.; Pahlavian, Soroush Heidari; Thyagaraj, Suraj; Bunck, Alexander C.; Loth, Francis; Sheffer, Daniel B.; Kröger, Jan Robert; Stergiopulos, Nikolaos

    2015-01-01

For the first time, inter-operator dependence of MRI based computational fluid dynamics (CFD) modeling of cerebrospinal fluid (CSF) in the cervical spinal subarachnoid space (SSS) is evaluated. In vivo MRI flow measurements and anatomy MRI images were obtained at the cervico-medullary junction of a healthy subject and a Chiari I malformation patient. 3D anatomies of the SSS were reconstructed by manual segmentation by four independent operators for both cases. CFD results were compared at nine axial locations along the SSS in terms of hydrodynamic and geometric parameters. Intraclass correlation (ICC) assessed the inter-operator agreement for each parameter over the axial locations and coefficient of variance (CV) compared the percentage of variance for each parameter between the operators. Greater operator dependence was found for the patient (0.19 < ICC < 0.99) near the craniovertebral junction compared to the healthy subject (ICC > 0.78). For the healthy subject, hydraulic diameter and Womersley number had the least variance (CV = ~2%). For the patient, peak diastolic velocity and Reynolds number had the smallest variance (CV = ~3%). These results show a high degree of inter-operator reliability for MRI-based CFD simulations of CSF flow in the cervical spine for healthy subjects and a lower degree of reliability for patients with Type I Chiari malformation. PMID:26446009

  18. Optimization of myocardial deformation imaging in term and preterm infants.

    PubMed

    Poon, Chuen Y; Edwards, Julie M; Joshi, Suchita; Kotecha, Sailesh; Fraser, Alan G

    2011-03-01

    Myocardial deformation imaging is now used to assess regional ventricular function in infants but their small size presents particular technical challenges. We therefore investigated the determinants of reproducibility of myocardial longitudinal strain (ε) in term and preterm infants, in order to determine optimal technical settings. Repeated longitudinal ε measurements of the mid-segments of the septum, and the left and right ventricular free walls, were performed using five different computation distances (CDs; also called strain length) in 20 infants. The coefficients of variation (CV) were calculated for each CD. Overall, ε measurements were most reproducible with a CD of 6 mm (CV 11.7%). In preterm infants (<34 weeks gestation; mean ± SD diastolic LV length, 20.3 ± 3.5 mm), ε measurements were most reproducible with CD of 6 mm (CV 7.2%); in term infants (>37 weeks gestation; mean ± SD diastolic LV length, 29.6 ± 3.0 mm), ε measurements were most reproducible with CD of 10 mm (CV 13.2%). The reproducibility of measuring ε increased with higher frame rates, from CV of 17.3% at frame rates <180 per s to 11.7% for frame rates >180 per s and 9.6% for rates >248 per s. In newborn infants, tissue Doppler loops should be acquired at frame rates above 180 per s. Myocardial deformation analysis of preterm infants should be performed using a CD of 6 mm, whereas a CD of 10 mm is more reproducible in term infants.

  19. Real-time heart rate measurement for multi-people using compressive tracking

    NASA Astrophysics Data System (ADS)

    Liu, Lingling; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Dong, Liquan; Ma, Feilong; Pang, Zongguang; Cai, Zhi; Zhang, Yachu; Hua, Peng; Yuan, Ruifeng

    2017-09-01

    The rise of the aging population has created a demand for inexpensive, unobtrusive, automated health care solutions. Image PhotoPlethysmoGraphy (IPPG) aids in the development of these solutions by allowing for the extraction of physiological signals from video data. However, the main deficiencies of recent IPPG methods are that they are non-automated, non-real-time, and susceptible to motion artifacts (MA). In this paper, a real-time heart rate (HR) detection method for multiple subjects simultaneously is proposed and realized using the Open Computer Vision (OpenCV) library. It consists of automatically capturing facial video of multiple subjects through a webcam, detecting the region of interest (ROI) in the video, reducing the false detection rate with an improved Adaboost algorithm, reducing MA with an improved compressive tracking (CT) algorithm, denoising with a wavelet noise-suppression algorithm, and using multiple threads for higher detection speed. For comparison, HR was measured simultaneously using a medical pulse oximetry device for every subject during all sessions. Experimental results on a data set of 30 subjects show that the maximum average absolute error of heart rate estimation is less than 8 beats per minute (BPM), and processing is close to real time: with video recordings of ten subjects at a pixel resolution of 600 × 800, the average processing speed was about 17 frames per second (fps).
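
    The pipeline above ends in a frequency-domain HR estimate. As a minimal sketch of that final stage (not the authors' implementation; a synthetic signal stands in for webcam ROI data, and all names are illustrative), the dominant FFT peak of the mean ROI intensity within the physiological band gives the heart rate:

```python
import numpy as np

def estimate_hr_bpm(roi_means, fps):
    """Estimate heart rate (BPM) from a time series of mean ROI pixel
    intensities by locating the dominant FFT peak in the plausible
    heart-rate band 0.7-4 Hz (42-240 BPM)."""
    signal = np.asarray(roi_means, dtype=float)
    signal = signal - signal.mean()            # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)     # restrict to physiological band
    peak_freq = freqs[band][np.argmax(power[band])]
    return peak_freq * 60.0

# Synthetic 72-BPM pulse signal sampled at 30 fps (no webcam needed)
fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
roi_means = 128 + 0.5 * np.sin(2 * np.pi * 1.2 * t)  # 1.2 Hz = 72 BPM
print(round(estimate_hr_bpm(roi_means, fps)))  # → 72
```

    In the paper's setting, `roi_means` would come from averaging the face ROI returned by the Adaboost detector and CT tracker frame by frame; the wavelet denoising step would sit between ROI extraction and this spectral estimate.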

  20. Intraoperative Optical Coherence Tomography Using the RESCAN 700: Preliminary Results in Collagen Crosslinking.

    PubMed

    Pahuja, Natasha; Shetty, Rohit; Jayadev, Chaitra; Nuijts, Rudy; Hedge, Bharath; Arora, Vishal

    2015-01-01

    To compare the penetration of riboflavin using microscope-integrated real-time spectral-domain optical coherence tomography (ZEISS OPMI LUMERA 700 and ZEISS RESCAN 700) in keratoconus patients undergoing accelerated collagen crosslinking (ACXL) with the epithelium on (epi-on) versus the epithelium off (epi-off). Intraoperative images were obtained during each of the procedures. Seven keratoconus patients underwent epi-on ACXL and four underwent epi-off ACXL. A software tool was developed using Microsoft.NET and Open Computer Vision (OpenCV) libraries for image analysis. Pre- and postprocedure images were analyzed for changes in the corneal hyperreflectance pattern as a measure of the depth of riboflavin penetration. The mean corneal hyperreflectance in the epi-on group was 12.97 ± 1.49 gray scale units (GSU) before instillation of riboflavin and 14.46 ± 2.09 GSU after ACXL (P = 0.019), while in the epi-off group it was 11.43 ± 2.68 GSU and 16.98 ± 8.49 GSU, respectively (P = 0.002). The average depth of the band of hyperreflectance was 149.39 ± 15.63 microns in the epi-on group and 191.04 ± 32.18 microns in the epi-off group. This novel in vivo, real-time imaging study demonstrates riboflavin penetration during epi-on and epi-off ACXL.

  1. Automated Grading System for Evaluation of Superficial Punctate Keratitis Associated With Dry Eye.

    PubMed

    Rodriguez, John D; Lane, Keith J; Ousler, George W; Angjeli, Endri; Smith, Lisa M; Abelson, Mark B

    2015-04-01

    To develop an automated method of grading fluorescein staining that accurately reproduces the clinical grading system currently in use. From the slit lamp photograph of the fluorescein-stained cornea, the region of interest was selected and the punctate dot count calculated using software developed with the OpenCV computer vision library. Images (n = 229) were then divided into six incremental severity categories based on computed scores. The final selection of 54 photographs represented the full range of scores: nine images from each of six categories. These were then evaluated by three investigators using a clinical 0 to 4 corneal staining scale. Pearson correlations were calculated to compare investigator scores, and mean investigator and automated scores. Lin's Concordance Correlation Coefficient (CCC) and Bland-Altman plots were used to assess agreement between methods and between investigators. Pearson's correlation between investigators was 0.914; the mean CCC between investigators was 0.882. Bland-Altman analysis indicated that scores assessed by investigator 3 were significantly higher than those of investigators 1 and 2 (paired t-test). The predicted grade was calculated as Gpred = 1.48 log(Ndots) - 0.206. Pearson's correlation coefficient between the two methods was 0.927 (P < 0.0001). The CCC between the predicted automated score Gpred and the mean investigator score was 0.929, 95% confidence interval (0.884-0.957). Bland-Altman analysis did not indicate bias. The difference in SD between clinical and automated methods was 0.398. An objective, automated analysis of corneal staining provides a quality assurance tool that can be used to substantiate clinical grading of key corneal staining endpoints in multicentered clinical trials of dry eye.
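
    The fitted model above maps a dot count directly to a predicted grade. A small sketch (the logarithm base is not stated in the abstract, so base 10 is assumed here, and the function name is illustrative):

```python
import math

def predicted_grade(n_dots):
    """Predicted corneal staining grade from the punctate dot count,
    per the paper's fitted model Gpred = 1.48 * log(Ndots) - 0.206.
    The log base (10 assumed here) is not stated in the abstract."""
    if n_dots < 1:
        return 0.0  # no detected dots: clamp to the bottom of the scale
    return 1.48 * math.log10(n_dots) - 0.206

# A cornea with 100 detected punctate dots maps to roughly grade 2.75
print(round(predicted_grade(100), 2))  # → 2.75
```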

  2. Medical informatics and telemedicine: A vision

    NASA Technical Reports Server (NTRS)

    Clemmer, Terry P.

    1991-01-01

    The goal of medical informatics is to improve care. This requires commitment and harmonious collaboration between computer scientists and clinicians, together with an integrated database. The vision described is of how medical information systems are going to change the way medical care is delivered in the future.

  3. Two-dimensional (2D) displacement measurement of moving objects using a new MEMS binocular vision system

    NASA Astrophysics Data System (ADS)

    Di, Si; Lin, Hui; Du, Ruxu

    2011-05-01

    Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate using MEMS technology. The images from the two microlenses are collected by a single complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at object distances ranging from 10 cm to 35 cm. Because of its low cost, small size, and simple setup, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.

  4. Non-Boolean computing with nanomagnets for computer vision applications

    NASA Astrophysics Data System (ADS)

    Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.

  5. A clinical study on "Computer vision syndrome" and its management with Triphala eye drops and Saptamrita Lauha.

    PubMed

    Gangamma, M P; Poonam; Rajagopala, Manjusha

    2010-04-01

    The American Optometric Association (AOA) defines computer vision syndrome (CVS) as a "complex of eye and vision problems related to near work, which are experienced during or related to computer use". Most studies indicate that Video Display Terminal (VDT) operators report more eye-related problems than non-VDT office workers. The causes of the inefficiencies and the visual symptoms are a combination of individual visual problems and poor office ergonomics. In this clinical study on CVS, 151 patients were registered, of whom 141 completed the treatment. In Group A, 45 patients were prescribed Triphala eye drops; in Group B, 53 patients were prescribed Triphala eye drops and Saptamrita Lauha tablets internally; and in Group C, 43 patients were prescribed placebo eye drops and placebo tablets. In total, marked improvement was observed in 48.89%, 54.71%, and 6.98% of patients in groups A, B, and C, respectively.

  6. Enhanced flyby science with onboard computer vision: Tracking and surface feature detection at small bodies

    NASA Astrophysics Data System (ADS)

    Fuchs, Thomas J.; Thompson, David R.; Bue, Brian D.; Castillo-Rogez, Julie; Chien, Steve A.; Gharibian, Dero; Wagstaff, Kiri L.

    2015-10-01

    Spacecraft autonomy is crucial to increase the science return of optical remote sensing observations at distant primitive bodies. To date, most small-body exploration has involved short-timescale flybys that execute pre-scripted data collection sequences. Light time delay means that the spacecraft must operate completely autonomously without direct control from the ground, but in most cases the physical properties and morphologies of prospective targets are unknown before the flyby. Surface features of interest are highly localized, and successful observations must account for geometry and illumination constraints. Under these circumstances onboard computer vision can improve science yield by responding immediately to collected imagery. It can reacquire bad data or identify features of opportunity for additional targeted measurements. We present a comprehensive framework for onboard computer vision for flyby missions at small bodies. We introduce novel algorithms for target tracking, target segmentation, surface feature detection, and anomaly detection. The performance and generalization power are evaluated in detail using expert annotations on data sets from previous encounters with primitive bodies.

  7. Three-camera stereo vision for intelligent transportation systems

    NASA Astrophysics Data System (ADS)

    Bergendahl, Jason; Masaki, Ichiro; Horn, Berthold K. P.

    1997-02-01

    A major obstacle to the application of stereo vision in intelligent transportation systems is its high computational cost. In this paper, a PC-based three-camera stereo vision system constructed from off-the-shelf components is described. The system serves as a tool for developing and testing robust algorithms that approach real-time performance. We present an edge-based, subpixel stereo algorithm adapted to permit accurate distance measurements to objects in the field of view using a compact camera assembly. Once computed, the 3D scene information may be applied directly to a number of in-vehicle applications, such as adaptive cruise control, obstacle detection, and lane tracking. Moreover, since the largest computational cost is incurred in generating the 3D scene information, multiple applications that leverage this information can be implemented in a single system with minimal additional cost. On-road applications, such as vehicle counting and incident detection, are also possible. Preliminary in-vehicle road trial results are presented.
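
    Distance measurement from stereo rests on the standard pinhole relation Z = f·B/d, which also explains why subpixel disparity estimation matters: at long range, a fraction of a pixel of disparity error translates into metres of depth error. A minimal sketch with illustrative numbers (not from the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole-stereo relation Z = f * B / d: distance grows as
    disparity shrinks, so subpixel disparity accuracy dominates
    long-range depth accuracy."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 800 px focal length, 0.3 m baseline, 4 px disparity
print(depth_from_disparity(800, 0.3, 4.0))  # → 60.0 (metres)
```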

  8. Predicting pork loin intramuscular fat using computer vision system.

    PubMed

    Liu, J-H; Sun, X; Young, J M; Bachmeier, L A; Newman, D J

    2018-09-01

    The objective of this study was to investigate the ability of a computer vision system to predict pork intramuscular fat percentage (IMF%). Center-cut loin samples (n = 85) were trimmed of subcutaneous fat and connective tissue. Images were acquired and pixels were segregated to estimate image IMF% and 18 image color features for each image. Subjective IMF% was determined by a trained grader. Ether extract IMF% was calculated using the ether extract method. Image color features and image IMF% were used as predictors for stepwise regression and support vector machine models. Results showed that subjective IMF% had a correlation of 0.81 with ether extract IMF%, while image IMF% had a 0.66 correlation with ether extract IMF%. Accuracy rates for the regression models were 0.63 for stepwise and 0.75 for support vector machine. Although subjective IMF% showed better prediction, the results demonstrate the potential of a computer vision system to be used as a tool for predicting pork IMF% in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
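
    As a hedged sketch of the modeling setup described above (synthetic stand-in data; the paper's actual color features and stepwise/support-vector models are not reproduced here), a plain least-squares fit of measured IMF% on image-derived predictors:

```python
import numpy as np

# Hypothetical stand-in data mirroring the paper's setup: 85 loin samples,
# 18 image color features plus the image-estimated IMF% as predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(85, 19))
true_w = rng.normal(size=19)
y = X @ true_w + rng.normal(scale=0.1, size=85)  # stand-in for ether extract IMF%

# Ordinary least squares as a simple stand-in for the stepwise regression model
design = np.column_stack([np.ones(len(X)), X])   # add an intercept column
w, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ w
r = np.corrcoef(pred, y)[0, 1]
print(f"correlation of predicted vs. measured IMF%: {r:.2f}")
```

    The paper reports accuracy for stepwise regression and a support vector machine; this sketch only shows the shape of the regression problem, not those models.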

  9. Selective cultivation and rapid detection of Staphylococcus aureus by computer vision.

    PubMed

    Wang, Yong; Yin, Yongguang; Zhang, Chaonan

    2014-03-01

    In this paper, we developed a selective growth medium and a more rapid detection method based on computer vision for selective isolation and identification of Staphylococcus aureus from foods. The selective medium consisted of tryptic soy broth basal medium, 3 inhibitors (NaCl, K2TeO3, and phenethyl alcohol), and 2 accelerators (sodium pyruvate and glycine). After 4 h of selective cultivation, bacterial detection was accomplished using computer vision. The total analysis time was 5 h. Compared to the Baird-Parker plate count method, which requires 4 to 5 d, this new detection method offers great time savings. Moreover, our novel method had a correlation coefficient of greater than 0.998 when compared with the Baird-Parker plate count method. The detection range for S. aureus was 10 to 10^7 CFU/mL. Our new, rapid detection method for microorganisms in foods has great potential for routine food safety control and microbiological detection applications. © 2014 Institute of Food Technologists®

  10. InPRO: Automated Indoor Construction Progress Monitoring Using Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Hamledari, Hesam

    In this research, an envisioned intelligent robotic solution for automated indoor data collection and inspection that employs a series of unmanned aerial vehicles (UAVs), entitled "InPRO", is presented. InPRO consists of four stages, namely: 1) automated path planning; 2) autonomous UAV-based indoor inspection; 3) automated computer vision-based assessment of progress; and 4) automated updating of 4D building information models (BIM). The work presented in this thesis addresses the third stage of InPRO. A series of computer vision-based methods that automate the assessment of construction progress using images captured at indoor sites are introduced. The proposed methods employ computer vision and machine learning techniques to detect the components of under-construction indoor partitions. In particular, framing (studs), insulation, electrical outlets, and different states of drywall sheets (installing, plastering, and painting) are automatically detected in digital images. High accuracy rates, real-time performance, and operation without a priori information are indicators of the methods' promising performance.

  11. Computer vision uncovers predictors of physical urban change.

    PubMed

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L; Hidalgo, César A

    2017-07-18

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements-an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements-an observation that is consistent with "tipping" theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods-an observation that is consistent with the "invasion" theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities.

  12. Computer vision uncovers predictors of physical urban change

    PubMed Central

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L.; Hidalgo, César A.

    2017-01-01

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements—an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements—an observation that is consistent with “tipping” theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods—an observation that is consistent with the “invasion” theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities. PMID:28684401

  13. 2013 Progress Report -- DOE Joint Genome Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-11-01

    In October 2012, we introduced a 10-Year Strategic Vision [http://bit.ly/JGI-Vision] for the Institute. A central focus of this Strategic Vision is to bridge the gap between sequenced genomes and an understanding of biological functions at the organism and ecosystem level. This involves the continued massive-scale generation of sequence data, complemented by orthogonal new capabilities to functionally annotate these large sequence data sets. Our Strategic Vision lays out a path to guide our decisions and ensure that the evolving set of experimental and computational capabilities available to DOE JGI users will continue to enable groundbreaking science.

  14. Dynamic programming and graph algorithms in computer vision.

    PubMed

    Felzenszwalb, Pedro F; Zabih, Ramin

    2011-04-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.

  15. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Kevin L., E-mail: kevinmoore@ucsd.edu; Moiseenko, Vitali; Kagadis, George C.

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  16. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Kevin L., E-mail: kevinmoore@ucsd.edu; Moiseenko, Vitali; Kagadis, George C.

    2014-01-15

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  17. Vector disparity sensor with vergence control for active vision systems.

    PubMed

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems, as used in robotics applications. Controlling the vergence angle of a binocular system allows us to explore dynamic environments efficiently, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system.

  18. Computer vision based nacre thickness measurement of Tahitian pearls

    NASA Astrophysics Data System (ADS)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros and more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl intended for export has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large variety of shapes and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a custom heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measure that accounts for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to evaluate the nacre thickness of Tahitian pearls automatically with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.

  19. Computer vision cracks the leaf code

    PubMed Central

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas

    2016-01-01

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664

  20. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    PubMed Central

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems, as used in robotics applications. Controlling the vergence angle of a binocular system allows us to explore dynamic environments efficiently, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system. PMID:22438737

  1. Investigation of safety analysis methods using computer vision techniques

    NASA Astrophysics Data System (ADS)

    Shirazi, Mohammad Shokrolah; Morris, Brendan Tran

    2017-09-01

    This work investigates safety analysis methods using computer vision techniques. A vision-based tracking system is developed to provide the trajectories of road users, including vehicles and pedestrians. Safety analysis methods are developed to estimate time to collision (TTC) and post-encroachment time (PET), two important safety measures. The corresponding algorithms are presented, and their advantages and drawbacks are shown through their success in capturing conflict events in real time. The performance of the tracking system is evaluated first, and probability density estimates of TTC and PET are shown for 1 h of monitoring of a Las Vegas intersection. Finally, the idea of an intersection safety map is introduced, and TTC values for two different intersections are estimated over 1 day from 8:00 a.m. to 6:00 p.m.
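
    Both safety measures have simple kinematic definitions once trajectories are available. A minimal sketch with hypothetical trajectory-derived inputs (the paper's estimators operate on tracked positions over time; function names and numbers here are illustrative):

```python
def time_to_collision(gap_m, speed_follower, speed_leader):
    """TTC for a following road user: the gap divided by the closing
    speed. Defined only while the follower is faster (closing)."""
    closing = speed_follower - speed_leader
    return gap_m / closing if closing > 0 else float("inf")

def post_encroachment_time(t_leave, t_arrive):
    """PET: time between the first road user leaving a conflict zone
    and the second road user arriving at it."""
    return t_arrive - t_leave

# A vehicle 20 m behind a slower one, closing at 8 m/s -> TTC of 2.5 s
print(time_to_collision(20.0, 10.0, 2.0))    # → 2.5
# Second user reaches the conflict zone 1.5 s after the first leaves it
print(post_encroachment_time(12.0, 13.5))    # → 1.5
```

    Low TTC or PET values flag conflict events; accumulating them over an hour of tracking yields the probability density estimates reported in the paper.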

  2. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

    In his 1960 paper Man-Computer Symbiosis, Licklider predicted that human brains and computing machines would be coupled in a tight partnership that would think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today. Today we are on the threshold of resurrecting the vision of symbiosis. While Licklider's original vision suggested a co-equal relationship, here we discuss an updated vision, neo-symbiosis, in which the human holds a superordinate position in an intelligent human-computer collaborative environment. This paper was originally published as a journal article and is being published as a chapter in an upcoming book series, Advances in Novel Approaches in Cognitive Informatics and Natural Intelligence.

  3. Head pose estimation in computer vision: a survey.

    PubMed

    Murphy-Chutorian, Erik; Trivedi, Mohan Manubhai

    2009-04-01

    The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments.

  4. A Vision-Based Motion Sensor for Undergraduate Laboratories.

    ERIC Educational Resources Information Center

    Salumbides, Edcel John; Maristela, Joyce; Uy, Alfredson; Karremans, Kees

    2002-01-01

    Introduces an alternative method to determine the mechanics of a moving object that uses computer vision algorithms with a charge-coupled device (CCD) camera as a recording device. Presents two experiments, pendulum motion and terminal velocity, to compare results of the alternative and conventional methods. (YDS)

  5. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old devices to recent ones. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  6. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are difficult to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth of field, accommodation, and chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    NASA Astrophysics Data System (ADS)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis a well known optimization method (the Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm for numerical stability, automatic selection of parameters, and convergence properties.
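
    The claimed scaling behavior can be illustrated with a toy model; this is our own sketch, not the thesis code, assuming compute cost proportional to the area of each processor's block and communication cost proportional to its perimeter:

    ```python
    def parallel_efficiency(n, p, t_compute=1.0, t_comm=1.0):
        # Each of p processors gets an (n/sqrt(p)) x (n/sqrt(p)) block
        # of the image: compute cost grows with the block's area,
        # communication with its perimeter, so efficiency tends to 1
        # as the block grows.
        side = n / p ** 0.5
        work = t_compute * side ** 2
        comm = t_comm * 4 * side
        return work / (work + comm)
    ```

    With 16 processors, for example, efficiency rises toward 1 as the image side n grows, consistent with the near-100% figure for large assigned regions.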

  8. Modern Approaches to the Computation of the Probability of Target Detection in Cluttered Environments

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.

    The field of computer vision interacts with fields such as psychology, vision research, machine vision, psychophysics, mathematics, physics, and computer science. The focus of this thesis is new algorithms and methods for the computation of the probability of detection (Pd) of a target in a cluttered scene. The scene can be either a natural visual scene such as one sees with the naked eye (visual), or, a scene displayed on a monitor with the help of infrared sensors. The relative clutter and the temperature difference between the target and background (ΔT) are defined and then used to calculate a relative signal-to-clutter ratio (SCR) from which the Pd is calculated for a target in a cluttered scene. It is shown how this definition can include many previous definitions of clutter and ΔT. Next, fuzzy and neural-fuzzy techniques are used to calculate the Pd and it is shown how these methods can give results that have a good correlation with experiment. The experimental design for actually measuring the Pd of a target by observers is described. Finally, wavelets are applied to the calculation of clutter and it is shown how this new definition of clutter based on wavelets can be used to compute the Pd of a target.

  9. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  10. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    NASA Astrophysics Data System (ADS)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different multi-view stereo (MVS) algorithms and of different techniques for image matching, feature extraction, and mesh optimization is an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on his or her computer, whereas desktop systems require long processing times and heavyweight workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but few approaches verify metric accuracy, and none addresses Autodesk 123D Catch as applied to architectural heritage documentation. Our approach to this challenging problem is to compare the 3D models by Autodesk 123D Catch with 3D models by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioners' purposes.

  11. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques that use digital cameras to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows for remote measurement, is non-intrusive, and does not add mass to the structure. In this study, a high-speed camera system is developed to complete the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel onto the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish vibration measurement of large-scale structures.
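
    The core of Lucas-Kanade template tracking can be sketched for the simplest warp, a pure translation; this is a one-iteration illustration of the principle under the small-motion assumption, not the authors' modified inverse compositional algorithm:

    ```python
    import numpy as np

    def lk_translation(template, image):
        # One Lucas-Kanade step: linearize template(x) ~ image(x) + grad(image).d
        # and solve the 2x2 normal equations for the displacement d = (dx, dy).
        gy, gx = np.gradient(image.astype(float))  # axis-0 then axis-1 gradients
        err = template.astype(float) - image.astype(float)
        A = np.array([[(gx * gx).sum(), (gx * gy).sum()],
                      [(gx * gy).sum(), (gy * gy).sum()]])
        b = np.array([(gx * err).sum(), (gy * err).sum()])
        dx, dy = np.linalg.solve(A, b)
        return dx, dy

    # Synthetic check: a Gaussian blob shifted by one pixel in x.
    y, x = np.mgrid[0:64, 0:64]
    image = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / (2 * 6.0 ** 2))
    template = np.exp(-((x - 31.0) ** 2 + (y - 32.0) ** 2) / (2 * 6.0 ** 2))
    dx, dy = lk_translation(template, image)  # dx close to 1, dy close to 0
    ```

    In the full method, this single step is iterated, usually inside a coarse-to-fine loop, and generalized to richer warps than translation.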

  12. Evaluation of tablet computers for visual function assessment.

    PubMed

    Bodduluri, Lakshmi; Boon, Mei Ying; Dain, Stephen J

    2017-04-01

    Recent advances in technology and the increased use of tablet computers for mobile health applications such as vision testing necessitate an understanding of the behavior of the displays of such devices, to facilitate the reproduction of existing or the development of new vision assessment tests. The purpose of this study was to investigate the physical characteristics of one model of tablet computer (iPad mini Retina display) with regard to display consistency across a set of devices (15) and their potential application as clinical vision assessment tools. Once the tablet computer was switched on, it required about 13 min to reach luminance stability, while chromaticity remained constant. The luminance output of the device remained stable until a battery level of 5%. Luminance varied from center to peripheral locations of the display and with viewing angle, whereas the chromaticity did not vary. A minimal (1%) variation in luminance was observed due to temperature, and once again chromaticity remained constant. Also, these devices showed good temporal stability of luminance and chromaticity. All 15 tablet computers showed gamma functions approximating the standard gamma (2.20) and showed similar color gamut sizes, except for the blue primary, which displayed minimal variations. The physical characteristics across the 15 devices were similar and are known, thereby facilitating the use of this model of tablet computer as visual stimulus displays.

  13. A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Obergfell, Klaus

    1991-01-01

    The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error to a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson et al., 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
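
    The Jacobian-based positioning loop can be sketched for a rigid planar two-link arm; this is our own minimal illustration (link lengths, start pose, and target invented for the example), not the thesis implementation, and exact forward kinematics stands in for the vision measurement:

    ```python
    import numpy as np

    L1, L2 = 1.0, 0.8  # link lengths, arbitrary for the example

    def tip_position(q):
        # forward kinematics of a planar two-link arm
        return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                         L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

    def jacobian(q):
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                         [L1 * c1 + L2 * c12, L2 * c12]])

    def position_to(target, q0, iters=50):
        # tip error -> joint correction via the (pseudo)inverse Jacobian,
        # yielding a new reference input for the joint controller
        q = np.array(q0, dtype=float)
        for _ in range(iters):
            error = target - tip_position(q)  # measured by vision in the thesis
            q = q + np.linalg.pinv(jacobian(q)) @ error
        return q

    q = position_to(np.array([1.2, 0.6]), [0.3, 0.5])  # tip converges to target
    ```

    On the real flexible arm the kinematic model is only approximate, which is exactly why the loop is closed on the vision-measured tip error rather than run open-loop.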

  14. Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.

    PubMed

    Sanchez, Yerly; Pinzon, David; Zheng, Bin

    2017-10-01

    To examine the reaction time when human subjects process information presented in the visual channel under both a direct vision and a virtual rehabilitation environment while walking was performed. Visual stimuli included eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training environment (computer-assisted rehabilitation environment (CAREN)) and in a direct vision environment. Subjects were required to verbally report the results of these math calculations within a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time and the verbal answer score. A significant difference between the groups was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment. Participants' reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program in a virtual training environment. Implications for rehabilitation: eye tracking is a reliable tool that can be employed in rehabilitation virtual environments, and reaction time changes between direct vision and virtual environments.

  15. Comments on "Including the effects of temperature-dependent opacities in the implicit Monte Carlo algorithm" by N.A. Gentile [J. Comput. Phys. 230 (2011) 5100-5114]

    NASA Astrophysics Data System (ADS)

    Ghosh, Karabi

    2017-02-01

    We briefly comment on a paper by N.A. Gentile [J. Comput. Phys. 230 (2011) 5100-5114] in which the Fleck factor has been modified to include the effects of temperature-dependent opacities in the implicit Monte Carlo algorithm developed by Fleck and Cummings [1,2]. Instead of the Fleck factor, f = 1/(1 + βcΔtσ_P), the author derived the modified Fleck factor g = 1/(1 + βcΔtσ_P − min[σ_P′(aT_r^4 − aT^4)cΔt/(ρC_V), 0]) to be used in the Implicit Monte Carlo (IMC) algorithm in order to obtain more accurate solutions with much larger time steps. Here β = 4aT^3/(ρC_V), σ_P is the Planck opacity, and σ_P′ = dσ_P/dT is the derivative of the Planck opacity with respect to the material temperature.
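
    Since the min[·, 0] term is never positive, subtracting it can only enlarge the denominator, so g ≤ f always holds. A direct transcription of the two formulas (with illustrative, non-physical parameter values) makes this concrete:

    ```python
    def fleck(beta, c, dt, sigma_p):
        # standard Fleck factor f = 1 / (1 + beta*c*dt*sigma_P)
        return 1.0 / (1.0 + beta * c * dt * sigma_p)

    def modified_fleck(beta, c, dt, sigma_p, dsigma_dT, a, T_r, T, rho, C_v):
        # Gentile's g subtracts a non-positive correction inside the
        # denominator, so g <= f for any inputs.
        corr = min(dsigma_dT * (a * T_r ** 4 - a * T ** 4) * c * dt / (rho * C_v), 0.0)
        return 1.0 / (1.0 + beta * c * dt * sigma_p - corr)

    f = fleck(0.1, 3e10, 1e-10, 1.0)
    g = modified_fleck(0.1, 3e10, 1e-10, 1.0, -0.01, 0.01, 2.0, 1.0, 1.0, 1.0)
    ```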

  16. Continuous Variable Cluster State Generation over the Optical Spatial Mode Comb

    DOE PAGES

    Pooser, Raphael C.; Jing, Jietai

    2014-10-20

    One-way quantum computing uses single-qubit projective measurements performed on a cluster state (a highly entangled state of multiple qubits) in order to enact quantum gates. The model is promising due to its potential scalability; the cluster state may be produced at the beginning of the computation and operated on over time. Continuous variables (CV) offer another potential benefit in the form of deterministic entanglement generation. This determinism can lead to robust cluster states and scalable quantum computation. Recent demonstrations of CV cluster states have made great strides on the path to scalability, utilizing either time or frequency multiplexing in optical parametric oscillators (OPOs) both above and below threshold. The techniques relied on a combination of entangling operators and beam-splitter transformations. Here we show that an analogous transformation exists for amplifiers with Gaussian input states operating on multiple spatial modes. By judicious selection of local oscillators (LOs), the spatial mode distribution is analogous to the optical frequency comb consisting of axial modes in an OPO cavity. We outline an experimental system that generates cluster states across the spatial frequency comb and can also scale the amount of quantum noise reduction to potentially larger than in other systems.

  17. The loss and recovery of vertebrate vision examined in microplates.

    PubMed

    Thorn, Robert J; Clift, Danielle E; Ojo, Oladele; Colwill, Ruth M; Creton, Robbert

    2017-01-01

    Regenerative medicine offers potentially ground-breaking treatments of blindness and low vision. However, as new methodologies are developed, a critical question will need to be addressed: how do we monitor in vivo for functional success? In the present study, we developed novel behavioral assays to examine vision in a vertebrate model system. In the assays, zebrafish larvae are imaged in multiwell or multilane plates while various red, green, blue, yellow or cyan objects are presented to the larvae on a computer screen. The assays were used to examine a loss of vision at 4 or 5 days post-fertilization and a gradual recovery of vision in subsequent days. The developed assays are the first to measure the loss and recovery of vertebrate vision in microplates and provide an efficient platform to evaluate novel treatments of visual impairment.

  18. Computer Vision Techniques for Transcatheter Intervention

    PubMed Central

    Zhao, Feng; Roach, Matthew

    2015-01-01

    Minimally invasive transcatheter technologies have demonstrated substantial promise for the diagnosis and the treatment of cardiovascular diseases. For example, transcatheter aortic valve implantation is an alternative to aortic valve replacement for the treatment of severe aortic stenosis, and transcatheter atrial fibrillation ablation is widely used for the treatment and the cure of atrial fibrillation. In addition, catheter-based intravascular ultrasound and optical coherence tomography imaging of coronary arteries provides important information about the coronary lumen, wall, and plaque characteristics. Qualitative and quantitative analysis of these cross-sectional image data will be beneficial to the evaluation and the treatment of coronary artery diseases such as atherosclerosis. In all the phases (preoperative, intraoperative, and postoperative) of the transcatheter intervention procedure, computer vision techniques (e.g., image segmentation and motion tracking) have been widely applied in the field to accomplish tasks like annulus measurement, valve selection, catheter placement control, and vessel centerline extraction. This provides beneficial guidance for clinicians in surgical planning, disease diagnosis, and treatment assessment. In this paper, we present a systematic review of these state-of-the-art methods. We aim to give a comprehensive overview for researchers in the area of computer vision on the subject of transcatheter intervention. Research in medical computing is by nature multi-disciplinary, and hence it is important to understand the application domain, clinical background, and imaging modality, so that methods and quantitative measurements derived from analyzing the imaging data are appropriate and meaningful. We thus provide an overview of the background of transcatheter intervention procedures, as well as a review of the computer vision techniques and methodologies applied in this area. PMID:27170893

  19. Image segmentation for enhancing symbol recognition in prosthetic vision.

    PubMed

    Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming

    2012-01-01

    Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions, and using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.

  20. Information Weighted Consensus for Distributed Estimation in Vision Networks

    ERIC Educational Resources Information Center

    Kamal, Ahmed Tashrif

    2013-01-01

    Due to their high fault-tolerance, ease of installation and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms…

  1. Bootstrapping and Maintaining Trust in the Cloud

    DTIC Science & Technology

    2016-03-16

    of infrastructure-as-a-service (IaaS) cloud computing services such as Amazon Web Services, Google Compute Engine, Rackspace, et al. means that...Implementation We implemented keylime in ∼3.2k lines of Python in four components: registrar, node, CV, and tenant. The registrar offers a REST-based web ...bootstrap key K. It provides an unencrypted REST-based web service for these two functions. As described earlier, the protocols for exchanging data

  2. Hyperbolic Harmonic Mapping for Surface Registration

    PubMed Central

    Shi, Rui; Zeng, Wei; Su, Zhengyu; Jiang, Jian; Damasio, Hanna; Lu, Zhonglin; Wang, Yalin; Yau, Shing-Tung; Gu, Xianfeng

    2016-01-01

    Automatic computation of surface correspondence via harmonic map is an active research field in computer vision, computer graphics and computational geometry. It may help document and understand physical and biological phenomena and also has broad applications in the biometrics, medical imaging and motion capture industries. Although numerous studies have been devoted to harmonic map research, limited progress has been made in computing a diffeomorphic harmonic map on general topology surfaces with landmark constraints. This work conquers this problem by changing the Riemannian metric on the target surface to a hyperbolic metric so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints. The computational algorithms are based on Ricci flow and nonlinear heat diffusion methods. The approach is general and robust. We employ our algorithm to study the constrained surface registration problem, which applies to both computer vision and medical imaging applications. Experimental results demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic and achieve relatively high performance when evaluated with some popular surface registration evaluation standards. PMID:27187948

  3. Factors leading to the computer vision syndrome: an issue at the contemporary workplace.

    PubMed

    Izquierdo, Juan C; García, Maribel; Buxó, Carmen; Izquierdo, Natalio J

    2007-01-01

    Vision and eye related problems are common among computer users, and have been collectively called the Computer Vision Syndrome (CVS). An observational study was done to identify the risk factors leading to the CVS. Twenty-eight participants answered a validated questionnaire, and had their workstations examined. The questionnaire evaluated personal, environmental, ergonomic factors, and physiologic response of computer users. The distance from the eye to the computer's monitor (A), the computer's monitor height (B), and visual axis height (C) were measured. The difference between B and C was calculated and labeled as D. Angles of gaze to the computer monitor were calculated using the formula: angle = tan⁻¹(D/A). Angles were divided into two groups: participants with angles of gaze ranging from 0 degrees to 13.9 degrees were included in Group 1; and participants gazing at angles larger than 14 degrees were included in Group 2. Statistical analysis of the evaluated variables was made. Computer users in both groups used more tear supplements (as part of the syndrome) than expected. This association was statistically significant (p < 0.10). Participants in Group 1 reported more pain than participants in Group 2. Associations between the CVS and other personal or ergonomic variables were not statistically significant. Our findings show that the most important factor leading to the syndrome is the angle of gaze at the computer monitor. Pain in computer users is diminished when gazing downwards at angles of 14 degrees or more. The CVS remains an underestimated and poorly understood issue at the workplace. The general public, health professionals, the government, and private industries need to be educated about the CVS.
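
    The angle computation used to split the two groups is a one-liner; a minimal sketch (variable names ours, units arbitrary but consistent):

    ```python
    import math

    def gaze_angle_deg(A, D):
        # A: eye-to-monitor distance; D: monitor height minus visual-axis
        # height, in the same units; returns the gaze angle in degrees.
        return math.degrees(math.atan(D / A))

    # A monitor 15 cm below the visual axis viewed from 60 cm away:
    angle = gaze_angle_deg(60.0, 15.0)  # about 14 degrees, i.e. Group 2
    ```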

  4. Factors leading to the Computer Vision Syndrome: an issue at the contemporary workplace.

    PubMed

    Izquierdo, Juan C; García, Maribel; Buxó, Carmen; Izquierdo, Natalio J

    2004-01-01

    Vision and eye related problems are common among computer users, and have been collectively called the Computer Vision Syndrome (CVS). An observational study was done to identify the risk factors leading to the CVS. Twenty-eight participants answered a validated questionnaire, and had their workstations examined. The questionnaire evaluated personal, environmental, ergonomic factors, and physiologic response of computer users. The distance from the eye to the computer's monitor (A), the computer's monitor height (B), and visual axis height (C) were measured. The difference between B and C was calculated and labeled as D. Angles of gaze to the computer monitor were calculated using the formula: angle = tan⁻¹(D/A). Angles were divided into two groups: participants with angles of gaze ranging from 0 degrees to 13.9 degrees were included in Group 1; and participants gazing at angles larger than 14 degrees were included in Group 2. Statistical analysis of the evaluated variables was made. Computer users in both groups used more tear supplements (as part of the syndrome) than expected. This association was statistically significant (p < 0.10). Participants in Group 1 reported more pain than participants in Group 2. Associations between the CVS and other personal or ergonomic variables were not statistically significant. Our findings show that the most important factor leading to the syndrome is the angle of gaze at the computer monitor. Pain in computer users is diminished when gazing downwards at angles of 14 degrees or more. The CVS remains an underestimated and poorly understood issue at the workplace. The general public, health professionals, the government, and private industries need to be educated about the CVS.

  5. Computer vision for microscopy diagnosis of malaria.

    PubMed

    Tek, F Boray; Dempster, Andrew G; Kale, Izzet

    2009-07-13

    This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.
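
    The described framework (image acquisition, pre-processing, segmentation, pattern classification) can be caricatured in a few lines; this toy sketch is ours, assumes stained parasites appear darker than the background, and is in no way a diagnostic tool:

    ```python
    import numpy as np

    def preprocess(img):
        # 3x3 box smoothing to suppress pixel noise (a stand-in for
        # real illumination correction and filtering)
        padded = np.pad(img, 1, mode='edge')
        out = np.zeros_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = padded[i:i + 3, j:j + 3].mean()
        return out

    def segment(img, thresh=0.5):
        # stained objects assumed darker than the bright background
        return img < thresh

    def classify(mask, min_area=4):
        # toy decision rule: flag the smear if a large enough dark
        # region survives smoothing and thresholding
        return int(mask.sum()) >= min_area

    slide = np.full((8, 8), 0.9)   # bright background
    slide[2:5, 2:5] = 0.0          # dark blob standing in for a parasite
    flagged = classify(segment(preprocess(slide)))
    ```

    Real systems replace each stage with far more sophisticated components, which is precisely the design space the review surveys.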

  6. A Vision on the Status and Evolution of HEP Physics Software Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canal, P.; Elvira, D.; Hatcher, R.

    2013-07-28

    This paper represents the vision of the members of the Fermilab Scientific Computing Division's Computational Physics Department (SCD-CPD) on the status and the evolution of various HEP software tools such as the Geant4 detector simulation toolkit, the Pythia and GENIE physics generators, and the ROOT data analysis framework. The goal of this paper is to contribute ideas to the Snowmass 2013 process toward the composition of a unified document on the current status and potential evolution of the physics software tools which are essential to HEP.

  7. Computing motion using resistive networks

    NASA Technical Reports Server (NTRS)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

    Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.
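
    The mapping from cost functions to resistive networks can be illustrated in one dimension. Minimizing a membrane energy E(u) = lam*sum((u_i - d_i)^2) + sum((u_{i+1} - u_i)^2) by Jacobi relaxation mirrors a resistive grid settling to its stationary voltages: lam acts as a conductance to the data "battery" d, and the neighbour terms act as lateral resistors. A hedged sketch (the 1D energy and names are mine; the paper treats the 2D optical-flow case):

```python
import numpy as np

def solve_membrane(d, lam=1.0, iters=2000):
    """Relax E(u) = lam*sum((u-d)^2) + sum((u[i+1]-u[i])^2) by Jacobi sweeps.

    Each sweep sets u_i to the weighted average of its data value and its
    neighbours, which is exactly the stationary-voltage condition of the
    corresponding resistive node."""
    u = np.array(d, dtype=float)
    n = len(u)
    for _ in range(iters):
        new = np.empty_like(u)
        for i in range(n):
            nbrs = []
            if i > 0:
                nbrs.append(u[i - 1])
            if i < n - 1:
                nbrs.append(u[i + 1])
            new[i] = (lam * d[i] + sum(nbrs)) / (lam + len(nbrs))
        u = new
    return u
```

    With a single spike as data, the relaxed solution spreads the spike smoothly over its neighbours, just as charge spreads through the lateral resistors.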

  8. Overview of Key Saturn Probe Mission Trades

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Kowalkowski, Theresa; Folkner, Bill

    2007-01-01

    Ongoing studies, performed at NASA/JPL over the past two years in support of NASA's SSE Roadmap activities, proved the feasibility of a New Frontiers (NF) class Saturn probe mission. I. This proposed mission could also provide a good opportunity for international collaboration with the proposed Cosmic Vision KRONOS mission: a) with ESA-contributed probes (descent modules) on a NASA-led mission; b) an early 2017 launch could be a good programmatic option for ESA-CV/NASA-NF. II. A number of mission architectures could be suitable for this mission: a) a probe-relay architecture with short flight time (approx. 6.3-7 years); b) a direct-to-Earth (DTE) probe telecom architecture with long flight time (~11 years) and low probe data rate, but with the probes decoupled from the carrier, allowing for polar trajectories for the orbiter; this option may need technology development for telecom; c) an orbiter would likely increase mission cost relative to a flyby, but would provide significantly higher science return. The Saturn probe mission is expected to be identified in NASA's New Frontiers AO; thus, further studies are recommended to refine the most suitable architecture. International collaboration has started through the KRONOS proposal work; further collaborative studies will follow once KRONOS is selected in October under ESA's Cosmic Vision Program.

  9. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain has been found to emulate similar graph/network models. Symbols, predicates, and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, enabling intelligent computer vision systems for design and manufacturing.

  10. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  11. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in a few instruction cycles and therefore satisfy low- and mid-level high-speed image processing requirements. The RISC core controls the whole system operation and performs some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  12. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    NASA Astrophysics Data System (ADS)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

    This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, it describes attempts to create the vision system that will power this automatic cutup system. A number of factors make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly the same; in fact, appearance can differ significantly among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Second, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes, ranging from hardwood flooring to fancy hardwood furniture, from simple mill work to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper describes the vision system that has been developed, assesses its current capabilities, and discusses directions for future research. It is argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  13. Vision-based algorithms for near-host object detection and multilane sensing

    NASA Astrophysics Data System (ADS)

    Kenue, Surender K.

    1995-01-01

    Vision-based sensing can be used for lane sensing, adaptive cruise control, collision warning, and driver performance monitoring functions of intelligent vehicles. Current computer vision algorithms are not robust enough to handle multiple vehicles in highway scenarios. Several new algorithms are proposed for multi-lane sensing, near-host object detection, vehicle cut-in situations, and specifying regions of interest for object tracking. These algorithms were tested successfully on more than 6000 images taken from real highway scenes under different daytime lighting conditions.

  14. Colour vision abnormality as the only manifestation of normal pressure hydrocephalus.

    PubMed

    Asensio-Sánchez, V M; Martín-Prieto, A

    2018-01-01

    The case is presented of a 73-year-old male patient who referred to having black and white vision. Computed tomography showed normal pressure hydrocephalus (NPH). Magnetic resonance imaging was not performed because the patient refused to undergo further examinations. Achromatopsia may be the first or only NPH symptom. It may be prudent to ask patients with NPH regarding colour vision. Copyright © 2017 Sociedad Española de Oftalmología. Publicado por Elsevier España, S.L.U. All rights reserved.

  15. Vision-based navigation in a dynamic environment for virtual human

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu

    2004-06-01

    Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments, and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. This behavior model can be divided into three modules: vision, global planning, and local planning. Vision is the only channel through which the virtual actor gets information from the outside world. The global and local planning modules then use the A* and D* algorithms, respectively, to find a path for the virtual human in a dynamic environment. Finally, experiments on our test platform (Smart Human System) verify the feasibility of this behavior model.
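
    The A* step of the global planner can be sketched on a 4-connected occupancy grid. A minimal illustrative implementation (the grid representation and function name are mine, not the paper's Smart Human System):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), start)]
    g = {start: 0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur in closed:
            continue  # stale heap entry
        closed.add(cur)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), nxt))
    return None
```

    D* extends this idea by repairing the computed path incrementally when the map changes, which is what makes it suitable for the dynamic environments the paper targets.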

  16. Dynamic Programming and Graph Algorithms in Computer Vision*

    PubMed Central

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950
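
    For the stereo example, a classic dynamic-programming formulation assigns each pixel on a scanline a disparity d, paying a match cost |left[x] - right[x-d]| plus a smoothness penalty lam*|d - d'| between neighbouring pixels, and solves the resulting chain exactly with a Viterbi-style recursion. A minimal sketch under those assumptions (an O(n*D^2) toy, not any specific algorithm surveyed in the paper):

```python
import numpy as np

def scanline_stereo(left, right, max_d, lam=1.0):
    """Exact DP over one scanline: minimize
    sum_x |left[x] - right[x-d_x]| + lam * sum_x |d_x - d_{x-1}|."""
    n = len(left)
    INF = float("inf")
    # unary match costs; disparity d at pixel x requires x - d >= 0
    unary = np.full((n, max_d + 1), INF)
    for x in range(n):
        for d in range(min(x, max_d) + 1):
            unary[x, d] = abs(float(left[x]) - float(right[x - d]))
    cost = unary[0].copy()
    back = np.zeros((n, max_d + 1), dtype=int)
    for x in range(1, n):
        new = np.full(max_d + 1, INF)
        for d in range(max_d + 1):
            for dp in range(max_d + 1):
                c = cost[dp] + lam * abs(d - dp) + unary[x, d]
                if c < new[d]:
                    new[d], back[x, d] = c, dp
        cost = new
    # backtrack the optimal disparity path
    d = int(np.argmin(cost))
    disp = [d]
    for x in range(n - 1, 0, -1):
        d = int(back[x, d])
        disp.append(d)
    return disp[::-1]
```

    The "non-trivial guarantee" mentioned above is concrete here: per scanline the recursion returns a global minimum of the stated energy, not a local one.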

  17. Collective Computation of Neural Network

    DTIC Science & Technology

    1990-03-15

    Sciences, Beijing ABSTRACT Computational neuroscience is a new branch of neuroscience originating from current research on the theory of computer...scientists working in artificial intelligence engineering and neuroscience . The paper introduces the collective computational properties of model neural...vision research. On this basis, the authors analyzed the significance of the Hopfield model. Key phrases: Computational Neuroscience , Neural Network, Model

  18. Multitask neurovision processor with extensive feedback and feedforward connections

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1991-11-01

    A multi-task neuro-vision processor which performs a variety of information processing operations associated with the early stages of biological vision is presented. The network architecture of this neuro-vision processor, called the positive-negative (PN) neural processor, is loosely based on the neural activity fields exhibited by thalamic and cortical nervous tissue layers. The computational operation performed by the processor arises from the strength of the recurrent feedback among the numerous positive and negative neural computing units. By adjusting the feedback connections it is possible to generate diverse dynamic behavior that may be used for short-term visual memory (STVM), spatio-temporal filtering (STF), and pulse frequency modulation (PFM). The information attributes that are to be processed may be regulated by modifying the feedforward connections from the signal space to the neural processor.

  19. Optogenetics and computer vision for Caenorhabditis elegans neuroscience and other biophysical applications

    NASA Astrophysics Data System (ADS)

    Leifer, Andrew Michael

    2011-07-01

    This work presents optogenetics and real-time computer vision techniques to non-invasively manipulate and monitor neural activity with high spatiotemporal resolution in awake behaving Caenorhabditis elegans. These methods were employed to dissect the nematode's mechanosensory and motor circuits and to elucidate the neural control of wave propagation during forward locomotion. Additionally, similar computer vision methods were used to automatically detect and decode fluorescing DNA origami nanobarcodes, a new class of fluorescent reporter constructs. An optogenetic instrument capable of real-time light delivery with high spatiotemporal resolution to specified targets in freely moving C. elegans, the first such instrument of its kind, was developed. The instrument was used to probe the nematode's mechanosensory circuit, demonstrating that stimulation of a single mechanosensory neuron suffices to induce reversals. The instrument was also used to probe the motor circuit, demonstrating that inhibition of regions of cholinergic motor neurons blocks undulatory wave propagation and that muscle contractions can persist even without inputs from the motor neurons. The motor circuit was further probed using optogenetics and microfluidic techniques. Undulatory wave propagation during forward locomotion was observed to depend on stretch-sensitive signaling mediated by cholinergic motor neurons. Specifically, posterior body segments are compelled, through stretch-sensitive feedback, to bend in the same direction as anterior segments. This is the first explicit demonstration of such feedback and serves as a foundation for understanding motor circuits in other organisms. A real-time tracking system was developed to record intracellular calcium transients in single neurons while simultaneously monitoring macroscopic behavior of freely moving C. elegans. This was used to study the worm's stereotyped reversal behavior, the omega turn. 
Calcium transients corresponding to temporal features of the omega turn were observed in interneurons AVA and AVB. Optics and computer vision techniques similar to those developed for the C. elegans experiments were also used to detect DNA origami nanorod barcodes. An optimal Bayesian multiple hypothesis test was deployed to unambiguously classify each barcode as a member of one of 216 distinct barcode species. Overall, this set of experiments demonstrates the powerful role that optogenetics and computer vision can play in behavioral neuroscience and quantitative biophysics.

  20. Evaluating the Effects of Dimensionality in Advanced Avionic Display Concepts for Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Alexander, Amy L.; Prinzel, Lawrence J., III; Wickens, Christopher D.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.

    2007-01-01

    Synthetic vision systems provide an in-cockpit view of terrain and other hazards via a computer-generated display representation. Two experiments examined several display concepts for synthetic vision and evaluated how such displays modulate pilot performance. Experiment 1 (24 general aviation pilots) compared three navigational display (ND) concepts: 2D coplanar, 3D, and split-screen. Experiment 2 (12 commercial airline pilots) evaluated baseline 'blue sky/brown ground' or synthetic vision-enabled primary flight displays (PFDs) and three ND concepts: 2D coplanar with and without synthetic vision and a dynamic multi-mode rotatable exocentric format. In general, the results pointed to an overall advantage for a split-screen format, whether it be stand-alone (Experiment 1) or available via rotatable viewpoints (Experiment 2). Furthermore, Experiment 2 revealed benefits associated with utilizing synthetic vision in both the PFD and ND representations and the value of combined ego- and exocentric presentations.

  1. Integrated Imaging and Vision Techniques for Industrial Inspection: A Special Issue on Machine Vision and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep

    2010-06-05

    Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.

  2. People with Hemianopia Report Difficulty with TV, Computer, Cinema Use, and Photography.

    PubMed

    Costela, Francisco M; Sheldon, Sarah S; Walker, Bethany; Woods, Russell L

    2018-05-01

    Our survey found that participants with hemianopia report more difficulties watching video in various formats, including television (TV), on computers, and in a movie theater, compared with participants with normal vision (NV). These reported difficulties were not as marked as those reported by people with central vision loss. The aim of this study was to survey the viewing experience (e.g., frequency, difficulty) of viewing video on TV, computers and portable visual display devices, and at the cinema of people with hemianopia and NV. This information may guide vision rehabilitation. We administered a cross-sectional survey to investigate the viewing habits of people with hemianopia (n = 91) or NV (n = 192). The survey, consisting of 22 items, was administered either in person or in a telephone interview. Descriptive statistics are reported. There were five major differences between the hemianopia and NV groups. Many participants with hemianopia reported (1) at least "some" difficulty watching TV (39/82); (2) at least "some" difficulty watching video on a computer (16/62); (3) never attending the cinema (30/87); (4) at least some difficulty watching movies in the cinema (20/56), among those who did attend the cinema; and (5) never taking photographs (24/80). Some people with hemianopia reported methods that they used to help them watch video, including video playback and head turn. Although people with hemianopia report more difficulty with viewing video on TV and at the cinema, we are not aware of any rehabilitation methods specifically designed to assist people with hemianopia to watch video. The results of this survey may guide future vision rehabilitation.

  3. Employment after Vision Loss: Results of a Collective Case Study.

    ERIC Educational Resources Information Center

    Crudden, Adele

    2002-01-01

    A collective case study approach was used to examine factors that influence the job retention of persons with vision loss. Computer technology was found to be a major positive influence and print access and technology were a source of stress for most participants (n=10). (Contains 7 references.) (Author/CR)

  4. Cost-effectiveness analysis of routine pneumococcal vaccination in the UK: a comparison of the PHiD-CV vaccine and the PCV-13 vaccine using a Markov model.

    PubMed

    Delgleize, Emmanuelle; Leeuwenkamp, Oscar; Theodorou, Eleni; Van de Velde, Nicolas

    2016-11-30

    In 2010, the 13-valent pneumococcal conjugate vaccine (PCV-13) replaced the 7-valent vaccine (introduced in 2006) for vaccination against invasive pneumococcal diseases (IPDs), pneumonia and acute otitis media (AOM) in the UK. Using recent evidence on the impact of PCVs and epidemiological changes in the UK, we performed a cost-effectiveness analysis (CEA) to compare the pneumococcal non-typeable Haemophilus influenzae protein D conjugate vaccine (PHiD-CV) with PCV-13 in the ongoing national vaccination programme. CEA was based on a published Markov model. The base-case scenario accounted only for direct medical costs. Work days lost were considered in alternative scenarios. Calculations were based on serotype and disease-specific vaccine efficacies, serotype distributions and UK incidence rates and medical costs. Health benefits and costs related to IPD, pneumonia and AOM were accumulated over the lifetime of a UK birth cohort. The modelled intervention was vaccination of infants at 2, 4 and 12 months with PHiD-CV or PCV-13, assuming complete coverage and adherence. The incremental cost-effectiveness ratio (ICER) was computed by dividing the difference in costs between the programmes by the difference in quality-adjusted life-years (QALY). Under our model assumptions, both vaccines had a similar impact on IPD and pneumonia, but PHiD-CV generated a greater reduction in AOM cases (161 918), AOM-related general practitioner consultations (31 070) and tympanostomy tube placements (2399). At price parity, PHiD-CV vaccination was dominant over PCV-13, saving 734 QALYs as well as £3.68 million to the National Health Service (NHS). At the lower list price of PHiD-CV, the cost-savings would increase to £45.77 million. This model projected that PHiD-CV would provide both incremental health benefits and cost-savings compared with PCV-13 at price parity. Using PHiD-CV could result in substantial budget savings to the NHS. These savings could be used to implement other life-saving interventions. 
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
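
    The ICER computation described above, and the "dominance" behind the headline result, can be sketched in a few lines (the function name and example numbers are illustrative, not trial data):

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio (ICER) of strategy A vs B:
    ICER = (cost_A - cost_B) / (QALY_A - QALY_B).
    When A is cheaper and at least as effective, A 'dominates' B and
    no ratio is needed to prefer it."""
    d_cost = cost_a - cost_b
    d_qaly = qaly_a - qaly_b
    if d_cost == 0 and d_qaly == 0:
        return 0.0, "equivalent"
    if d_cost <= 0 and d_qaly >= 0:
        return None, "A dominates B"
    if d_cost >= 0 and d_qaly <= 0:
        return None, "A is dominated by B"
    return d_cost / d_qaly, "trade-off"
```

    In the trial's base case PHiD-CV was both cheaper and produced more QALYs than PCV-13 at price parity, which is exactly the dominant branch: no willingness-to-pay threshold needs to be invoked.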

  5. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense is involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created that the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing imaging processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  6. Prevalence by Computed Tomographic Angiography of Coronary Plaques in South Asian and White Patients With Type 2 Diabetes Mellitus at Low and High Risk Using Four Cardiovascular Risk Scores (UKPDS, FRS, ASCVD, and JBS3).

    PubMed

    Gobardhan, Sanjay N; Dimitriu-Leen, Aukelien C; van Rosendael, Alexander R; van Zwet, Erik W; Roos, Cornelis J; Oemrawsingh, Pranobe V; Kharagjitsingh, Aan V; Jukema, J Wouter; Delgado, Victoria; Schalij, Martin J; Bax, Jeroen J; Scholte, Arthur J H A

    2017-03-01

    The aim of this study was to explore the association between various cardiovascular (CV) risk scores and coronary atherosclerotic burden on coronary computed tomography angiography (CTA) in South Asians with type 2 diabetes mellitus and matched whites. Asymptomatic type 2 diabetic South Asians and whites were matched for age, gender, body mass index, hypertension, and hypercholesterolemia. Ten-year CV risk was estimated using different risk scores (United Kingdom Prospective Diabetes Study [UKPDS], Framingham Risk Score [FRS], AtheroSclerotic CardioVascular Disease [ASCVD], and Joint British Societies for the prevention of CVD [JBS3]) and categorized into low- and high-risk groups. The presence of coronary artery calcium (CAC) and obstructive coronary artery disease (CAD; ≥50% stenosis) was assessed using coronary CTA. Finally, the relation between coronary atherosclerosis on CTA and the low- and high-risk groups was compared. UKPDS, FRS, and ASCVD showed no differences in estimated CV risk between 159 South Asians and 159 matched whites. JBS3 showed a significantly greater absolute CV risk in South Asians (18.4% vs 14.2%, p <0.01). A higher prevalence of a CAC score >0 (69% vs 55%, p <0.05) and obstructive CAD (39% vs 27%, p <0.05) was observed in South Asians. South Asians categorized as high risk, using UKPDS, FRS, and ASCVD, showed more CAC and CAD compared with whites; JBS3 showed no differences. In conclusion, asymptomatic South Asians with type 2 diabetes mellitus more frequently showed CAC and obstructive CAD than matched whites in the population categorized as high-risk patients using UKPDS, FRS, and ASCVD as risk estimators. However, JBS3 seems to correlate best with CAC and CAD in both ethnicity groups compared with the other risk scores. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Big data computing: Building a vision for ARS information management

    USDA-ARS?s Scientific Manuscript database

    Improvements are needed within the ARS to increase scientific capacity and keep pace with new developments in computer technologies that support data acquisition and analysis. Enhancements in computing power and IT infrastructure are needed to provide scientists better access to high performance com...

  8. Notes from a clinical information system program manager. A solid vision makes all the difference.

    PubMed

    Staggers, N

    1997-01-01

    Today's CIS manager will create a vision that connects computerization in ambulatory, home and community-based care with increased responsibility for patients to assume self-care. Patients will be faced with a glut of information and they will need nursing help in determining the validity of information. The new vision in this environment will focus on integration, interoperability, and a new definition for patient-centered information. Creating a well-articulated vision is the first skill in the repertoire of a CIS manager's tool set. A vision provides the firm structure upon which the entire project can be built, and provides for links to life-cycle planning. This first step in project planning begins to bring order to the chaos of dynamic demands in clinical computing.

  9. Multisensory Mechanisms of Gaze Stabilization and Flight Control

    DTIC Science & Technology

    2008-12-17

    will give a brief summary of the scientific progress which, to a great extent, is covered by my previous report submitted in July 2008. I should...Engineers (cf. CV.) I was recently invited to give a presentation at a workshop organized by the Gatsby Unit for Computational Neuroscience

  10. Relationship of baseline HDL subclasses, small dense LDL and LDL triglyceride to cardiovascular events in the AIM-HIGH clinical trial.

    PubMed

    Albers, John J; Slee, April; Fleg, Jerome L; O'Brien, Kevin D; Marcovina, Santica M

    2016-08-01

    Previous results of the AIM-HIGH trial showed that baseline levels of the conventional lipid parameters were not predictive of future cardiovascular (CV) outcomes. The aims of this secondary analysis were to examine the levels of cholesterol in high density lipoprotein (HDL) subclasses (HDL2-C and HDL3-C), small dense low density lipoprotein (sdLDL-C), and LDL triglyceride (LDL-TG) at baseline, as well as the relationship between these levels and CV outcomes. Individuals with CV disease and low baseline HDL-C levels were randomized to simvastatin plus placebo or simvastatin plus extended release niacin (ERN), 1500 to 2000 mg/day, with ezetimibe added as needed in both groups to maintain an on-treatment LDL-C in the range of 40-80 mg/dL. The primary composite endpoint was death from coronary disease, nonfatal myocardial infarction, ischemic stroke, hospitalization for acute coronary syndrome, or symptom-driven coronary or cerebrovascular revascularization. HDL-C, HDL3-C, sdLDL-C and LDL-TG were measured at baseline by detergent-based homogeneous assays. HDL2-C was computed by the difference between HDL-C and HDL3-C. Analyses were performed on 3094 study participants who were already on statin therapy prior to enrollment in the trial. Independent contributions of lipoprotein fractions to CV events were determined by Cox proportional hazards modeling. Baseline HDL3-C was protective against CV events (HR: 0.84, p = 0.043) while HDL-C, HDL2-C, sdLDL-C and LDL-TG were not event-related (HR: 0.96, p = 0.369; HR: 1.07, p = 0.373; HR: 1.05, p = 0.492; HR: 1.03, p = 0.554, respectively). The results of this secondary analysis of the AIM-HIGH Study indicate that levels of HDL3-C, but not other lipoprotein fractions, are predictive of CV events, suggesting that the HDL3 subclass may be primarily responsible for the inverse association of HDL-C and CV disease. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Relationship of baseline HDL subclasses, small dense LDL and LDL triglyceride to cardiovascular events in the AIM-HIGH clinical trial

    PubMed Central

    Albers, John J; Slee, April; Fleg, Jerome L; O’Brien, Kevin D; Marcovina, Santica M

    2016-01-01

    Background and aims Previous results of the AIM-HIGH trial showed that baseline levels of the conventional lipid parameters were not predictive of future cardiovascular (CV) outcomes. The aims of this secondary analysis were to examine the levels of cholesterol in high density lipoprotein (HDL) subclasses (HDL2-C and HDL3-C), small dense low density lipoprotein (sdLDL-C), and LDL triglyceride (LDL-TG) at baseline, as well as the relationship between these levels and CV outcomes. Methods Individuals with CV disease and low baseline HDL-C levels were randomized to simvastatin plus placebo or simvastatin plus extended release niacin (ERN), 1,500 to 2,000 mg/day, with ezetimibe added as needed in both groups to maintain an on-treatment LDL-C in the range of 40 to 80 mg/dL. The primary composite endpoint was death from coronary disease, nonfatal myocardial infarction, ischemic stroke, hospitalization for acute coronary syndrome, or symptom-driven coronary or cerebrovascular revascularization. HDL-C, HDL3-C, sdLDL-C and LDL-TG were measured at baseline by detergent-based homogeneous assays. HDL2-C was computed by the difference between HDL-C and HDL3-C. Analyses were performed on 3,094 study participants who were already on statin therapy prior to enrollment in the trial. Independent contributions of lipoprotein fractions to CV events were determined by Cox proportional hazards modeling. Results Baseline HDL3-C was protective against CV events (HR: 0.84, p=0.043) while HDL-C, HDL2-C, sdLDL-C and LDL-TG were not event-related (HR: 0.96, p=0.369; HR: 1.07, p=0.373; HR: 1.05, p=0.492; HR: 1.03, p=0.554, respectively). Conclusions The results of this secondary analysis of the AIM-HIGH Study indicate that levels of HDL3-C, but not other lipoprotein fractions, are predictive of CV events, suggesting that the HDL3 subclass may be primarily responsible for the inverse association of HDL-C and CV disease. PMID:27320173
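
    The derived quantity in the methods above, HDL2-C, is simply the difference between the two directly assayed fractions. A minimal sketch of that computation (the participant values are invented for illustration, not taken from the trial):

    ```python
    def hdl2_c(hdl_c_total: float, hdl3_c: float) -> float:
        """HDL2-C (mg/dL) = total HDL-C minus directly assayed HDL3-C."""
        return hdl_c_total - hdl3_c

    # Hypothetical participant: HDL-C of 34 mg/dL, HDL3-C of 25 mg/dL
    hdl2 = hdl2_c(34.0, 25.0)
    assert hdl2 == 9.0
    ```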

  12. Method and apparatus for predicting the direction of movement in machine vision

    NASA Technical Reports Server (NTRS)

    Lawton, Teri B. (Inventor)

    1992-01-01

    A computer-simulated cortical network is presented. The network is capable of computing the visibility of shifts in the direction of movement. Additionally, the network can compute the following: (1) the magnitude of the position difference between the test and background patterns; (2) localized contrast differences at different spatial scales analyzed by computing temporal gradients of the difference and sum of the outputs of paired even- and odd-symmetric bandpass filters convolved with the input pattern; and (3) the direction of a test pattern moved relative to a textured background. The direction of movement of an object in the field of view of a robotic vision system is detected in accordance with nonlinear Gabor function algorithms. The movement of objects relative to their background is used to infer the 3-dimensional structure and motion of object surfaces.
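
    The paired even- and odd-symmetric bandpass filters described above form a quadrature pair whose squared, summed outputs give a phase-invariant local energy measure. A minimal one-dimensional sketch (the filter size, frequency, and sigma are illustrative choices, not the patent's values):

    ```python
    import numpy as np

    def gabor_pair(size=21, freq=0.15, sigma=3.0):
        """Quadrature pair of Gabor filters: even (cosine) and odd (sine)."""
        x = np.arange(size) - size // 2
        envelope = np.exp(-x**2 / (2 * sigma**2))
        even = envelope * np.cos(2 * np.pi * freq * x)   # even-symmetric
        odd = envelope * np.sin(2 * np.pi * freq * x)    # odd-symmetric
        return even, odd

    even, odd = gabor_pair()
    signal = np.random.default_rng(0).standard_normal(256)
    resp_even = np.convolve(signal, even, mode="same")
    resp_odd = np.convolve(signal, odd, mode="same")
    energy = resp_even**2 + resp_odd**2   # phase-invariant local energy
    ```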

  13. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187

  14. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  15. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  16. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  17. Color vision testing with a computer graphics system: preliminary results.

    PubMed

    Arden, G; Gündüz, K; Perry, S

    1988-06-01

    We report a method for computer enhancement of color vision tests. In our graphics system 256 colors are selected from a much larger range and displayed on a screen divided into 768 × 288 pixels. Eight-bit digital-to-analogue converters drive a high-quality monitor with separate inputs to the red, green, and blue amplifiers and calibrated gun chromaticities. The graphics are controlled by a PASCAL program written for a personal computer, which calculates the values of the red, green, and blue signals and specifies them in Commission Internationale de l'Eclairage X, Y, and Z fundamentals, so changes in chrominance occur without changes in luminance. The system for measuring color contrast thresholds with gratings is more than adequate in normal observers. In patients with mild retinal damage in whom other tests of visual function are normal, this method of testing color vision shows specific increases in contrast thresholds along tritan color-confusion lines. By the time the Hardy-Rand-Rittler and Farnsworth-Munsell 100-hue tests disclose abnormalities, gross defects in color contrast threshold can be seen with our system.
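
    The constant-luminance constraint described above means that when chrominance changes, the CIE Y component must stay fixed. A sketch of that constraint (the RGB-to-XYZ matrix below uses sRGB primaries purely for illustration; the original system used the calibrated gun chromaticities of its own monitor):

    ```python
    import numpy as np

    RGB_TO_XYZ = np.array([
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],   # middle row yields luminance Y
        [0.0193, 0.1192, 0.9505],
    ])

    def luminance(rgb):
        return float(RGB_TO_XYZ[1] @ np.asarray(rgb, dtype=float))

    def match_luminance(rgb, target_y):
        """Scale a linear-RGB color so its Y equals target_y (chromaticity unchanged)."""
        rgb = np.asarray(rgb, dtype=float)
        return rgb * (target_y / luminance(rgb))

    grey = np.array([0.4, 0.4, 0.4])
    reddish = match_luminance([0.9, 0.3, 0.3], luminance(grey))
    # grey and reddish now differ in chrominance but not in luminance
    ```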

  18. Flight data acquisition methodology for validation of passive ranging algorithms for obstacle avoidance

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1990-01-01

    The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.

  19. Container-code recognition system based on computer vision and deep neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules, a detection module and a recognition module. The detection module applies both algorithms based on computer vision and neural networks, and generates a better detection result through combination to avoid the drawbacks of the two methods. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result which passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism improves the performance of the neural networks at runtime. The proposed system achieves an overall recognition accuracy of 93%.

  20. Computer Vision-Based Structural Displacement Measurement Robust to Light-Induced Image Degradation for In-Service Bridges

    PubMed Central

    Lee, Junhwa; Lee, Kyoung-Chan; Cho, Soojin

    2017-01-01

    The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT), is challenging owing to issues related to inconvenient sensor installation that often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, mostly an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine a marker’s location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments. PMID:29019950

  1. Application of machine vision to pup loaf bread evaluation

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Chung, O. K.

    1996-12-01

    Intrinsic end-use quality of hard winter wheat breeding lines is routinely evaluated at the USDA, ARS, USGMRL, Hard Winter Wheat Quality Laboratory. Experimental baking test of pup loaves is the ultimate test for evaluating hard wheat quality. Computer vision was applied to developing an objective methodology for bread quality evaluation for the 1994 and 1995 crop wheat breeding line samples. Computer extracted features for bread crumb grain were studied, using subimages (32 by 32 pixel) and features computed for the slices with different threshold settings. A subsampling grid was located with respect to the axis of symmetry of a slice to provide identical topological subimage information. Different ranking techniques were applied to the databases. Statistical analysis was run on the database with digital image and breadmaking features. Several ranking algorithms and data visualization techniques were employed to create a sensitive scale for porosity patterns of bread crumb. There were significant linear correlations between machine vision extracted features and breadmaking parameters. Crumb grain scores by human experts were correlated more highly with some image features than with breadmaking parameters.

  2. Fast and robust generation of feature maps for region-based visual attention.

    PubMed

    Aziz, Muhammad Zaheer; Mertsching, Bärbel

    2008-05-01

    Visual attention is one of the important phenomena in biological vision which can be followed to achieve more efficiency, intelligence, and robustness in artificial vision systems. This paper investigates a region-based approach that performs pixel clustering prior to the processes of attention in contrast to late clustering as done by contemporary methods. The foundation steps of feature map construction for the region-based attention model are proposed here. The color contrast map is generated based upon the extended findings from the color theory, the symmetry map is constructed using a novel scanning-based method, and a new algorithm is proposed to compute a size contrast map as a formal feature channel. Eccentricity and orientation are computed using the moments of obtained regions and then saliency is evaluated using the rarity criteria. The efficient design of the proposed algorithms allows incorporating five feature channels while maintaining a processing rate of multiple frames per second. Another salient advantage over the existing techniques is the reusability of the salient regions in the high-level machine vision procedures due to preservation of their shapes and precise locations. The results indicate that the proposed model has the potential to efficiently integrate the phenomenon of attention into the main stream of machine vision and systems with restricted computing resources such as mobile robots can benefit from its advantages.
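
    The eccentricity and orientation "computed using the moments of obtained regions" can be sketched from the second-order central moments of a binary region mask. This is the standard moments-based formulation; the function below is our illustration, not the paper's code:

    ```python
    import numpy as np

    def region_orientation_ecc(mask):
        """Major-axis angle (radians) and eccentricity of a binary region."""
        ys, xs = np.nonzero(mask)
        xm, ym = xs.mean(), ys.mean()
        mu20 = ((xs - xm) ** 2).mean()
        mu02 = ((ys - ym) ** 2).mean()
        mu11 = ((xs - xm) * (ys - ym)).mean()
        theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # major-axis angle
        common = np.sqrt(4 * mu11**2 + (mu20 - mu02) ** 2)
        lam1 = (mu20 + mu02 + common) / 2   # eigenvalues of the
        lam2 = (mu20 + mu02 - common) / 2   # second-moment matrix
        ecc = np.sqrt(1 - lam2 / lam1) if lam1 > 0 else 0.0
        return theta, ecc

    # A horizontal bar: orientation near 0 rad, high eccentricity.
    mask = np.zeros((20, 20), dtype=bool)
    mask[9:11, 2:18] = True
    theta, ecc = region_orientation_ecc(mask)
    ```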

  3. NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.

  4. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262
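
    The engine's preference for questions with "essentially unpredictable answers" can be sketched as choosing, among candidate questions, the one whose conditional probability of a positive answer given the history is closest to one half. The questions and probabilities below are invented placeholders, not from the paper:

    ```python
    def pick_question(candidates):
        """candidates: list of (question, p_yes_given_history) pairs."""
        return min(candidates, key=lambda qp: abs(qp[1] - 0.5))

    question, p_yes = pick_question([
        ("is there a person?", 0.92),       # too predictable
        ("is the person walking?", 0.55),   # near 50/50, so informative
        ("is it raining?", 0.10),           # too predictable
    ])
    assert question == "is the person walking?"
    ```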

  5. Clinical efficacy of Ayurvedic management in computer vision syndrome: A pilot study.

    PubMed

    Dhiman, Kartar Singh; Ahuja, Deepak Kumar; Sharma, Sanjeev Kumar

    2012-07-01

    Improper use of sense organs, violating the moral code of conduct, and the effect of the time are the three basic causative factors behind all the health problems. Computer, the knowledge bank of modern life, has emerged as a profession causing vision-related discomfort, ocular fatigue, and systemic effects. Computer Vision Syndrome (CVS) is the new nomenclature to the visual, ocular, and systemic symptoms arising due to the long time and improper working on the computer and is emerging as a pandemic in the 21st century. On critical analysis of the symptoms of CVS on Tridoshika theory of Ayurveda, as per the road map given by Acharya Charaka, it seems to be a Vata-Pittaja ocular cum systemic disease which needs systemic as well as topical treatment approach. Shatavaryaadi Churna (orally), Go-Ghrita Netra Tarpana (topically), and counseling regarding proper working conditions on computer were tried in 30 patients of CVS. In group I, where oral and local treatment was given, significant improvement in all the symptoms of CVS was observed, whereas in groups II and III, local treatment and counseling regarding proper working conditions, respectively, were given and showed insignificant results. The study verified the hypothesis that CVS in Ayurvedic perspective is a Vata-Pittaja disease affecting mainly eyes and body as a whole and needs a systemic intervention rather than topical ocular medication only.

  6. Clinical efficacy of Ayurvedic management in computer vision syndrome: A pilot study

    PubMed Central

    Dhiman, Kartar Singh; Ahuja, Deepak Kumar; Sharma, Sanjeev Kumar

    2012-01-01

    Improper use of sense organs, violating the moral code of conduct, and the effect of the time are the three basic causative factors behind all the health problems. Computer, the knowledge bank of modern life, has emerged as a profession causing vision-related discomfort, ocular fatigue, and systemic effects. Computer Vision Syndrome (CVS) is the new nomenclature to the visual, ocular, and systemic symptoms arising due to the long time and improper working on the computer and is emerging as a pandemic in the 21st century. On critical analysis of the symptoms of CVS on Tridoshika theory of Ayurveda, as per the road map given by Acharya Charaka, it seems to be a Vata–Pittaja ocular cum systemic disease which needs systemic as well as topical treatment approach. Shatavaryaadi Churna (orally), Go-Ghrita Netra Tarpana (topically), and counseling regarding proper working conditions on computer were tried in 30 patients of CVS. In group I, where oral and local treatment was given, significant improvement in all the symptoms of CVS was observed, whereas in groups II and III, local treatment and counseling regarding proper working conditions, respectively, were given and showed insignificant results. The study verified the hypothesis that CVS in Ayurvedic perspective is a Vata–Pittaja disease affecting mainly eyes and body as a whole and needs a systemic intervention rather than topical ocular medication only. PMID:23723647

  7. Computer vision syndrome and ergonomic practices among undergraduate university students.

    PubMed

    Mowatt, Lizette; Gordon, Carron; Santosh, Arvind Babu Rajendra; Jones, Thaon

    2018-01-01

    To determine the prevalence of computer vision syndrome (CVS) and ergonomic practices among students in the Faculty of Medical Sciences at The University of the West Indies (UWI), Jamaica. A cross-sectional study was done with a self-administered questionnaire. Four hundred and nine students participated; 78% were females. The mean age was 21.6 years. Neck pain (75.1%), eye strain (67%), shoulder pain (65.5%) and eye burn (61.9%) were the most common CVS symptoms. Dry eyes (26.2%), double vision (28.9%) and blurred vision (51.6%) were the least commonly experienced symptoms. Eye burning (P = .001), eye strain (P = .041) and neck pain (P = .023) were significantly related to level of viewing. Moderate eye burning (55.1%) and double vision (56%) occurred in those who used handheld devices (P = .001 and .007, respectively). Moderate blurred vision was reported in 52% who looked down at the device compared with 14.8% who held it at an angle. Severe eye strain occurred in 63% of those who looked down at a device compared with 21% who kept the device at eye level. Shoulder pain was not related to pattern of use. Ocular symptoms and neck pain were less likely if the device was held just below eye level. There is a high prevalence of CVS symptoms among university students; improved ergonomic practices could reduce these symptoms, in particular neck pain, eye strain, and eye burning. © 2017 John Wiley & Sons Ltd.

  8. A FPGA-based architecture for real-time image matching

    NASA Astrophysics Data System (ADS)

    Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo

    2013-10-01

    Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images taken from the same scene at different viewpoints or different times. However, its large computational complexity has been a challenge to most embedded systems. This paper proposes a single-FPGA image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction and BRIEF matching. It optimizes the FPGA architecture for SIFT feature detection to reduce FPGA resource utilization. Moreover, we also implement BRIEF description and matching on the FPGA. The proposed system can perform image matching at 30 fps (frames per second) for 1280×720 images. Its processing speed can meet the demand of most real-life computer vision applications.
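
    The BRIEF stage that the paper moves onto the FPGA is, in software terms, a set of pairwise intensity comparisons followed by Hamming-distance matching. A minimal sketch (the descriptor length, patch size, and random test-pair layout are illustrative assumptions, not the paper's configuration):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N_BITS, PATCH = 256, 16
    # Each descriptor bit compares two fixed pixel positions: (y1, x1) vs (y2, x2).
    pairs = rng.integers(0, PATCH, size=(N_BITS, 4))

    def brief(patch):
        """Binary descriptor: one intensity comparison per bit."""
        return np.array([patch[y1, x1] < patch[y2, x2]
                         for y1, x1, y2, x2 in pairs], dtype=np.uint8)

    def hamming(d1, d2):
        return int(np.count_nonzero(d1 != d2))

    patch = rng.random((PATCH, PATCH))
    d_ref = brief(patch)
    d_noisy = brief(patch + 0.01 * rng.random((PATCH, PATCH)))   # same view, slight noise
    d_other = brief(rng.random((PATCH, PATCH)))                  # unrelated patch
    # A true match has a far smaller Hamming distance than an unrelated patch.
    ```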

  9. Computational thinking and thinking about computing

    PubMed Central

    Wing, Jeannette M.

    2008-01-01

    Computational thinking will influence everyone in every field of endeavour. This vision poses a new educational challenge for our society, especially for our children. In thinking about computing, we need to be attuned to the three drivers of our field: science, technology and society. Accelerating technological advances and monumental societal demands force us to revisit the most basic scientific questions of computing. PMID:18672462

  10. Testing meta tagger

    DTIC Science & Technology

    2017-12-21

    rank, and computer vision. Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on...Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed.[1] Arthur Samuel...an American pioneer in the field of computer gaming and artificial intelligence, coined the term "Machine Learning" in 1959 while at IBM[2]. Evolved

  11. Use of Chronic Kidney Disease to Enhance Prediction of Cardiovascular Risk in Those at Medium Risk.

    PubMed

    Chia, Yook Chin; Lim, Hooi Min; Ching, Siew Mooi

    2015-01-01

    Based on global cardiovascular (CV) risk assessment, for example using the Framingham risk score, it is recommended that those with high risk should be treated and those with low risk should not be treated. The recommendation for those of medium risk is less clear and uncertain. We aimed to determine whether factoring in chronic kidney disease (CKD) will improve CV risk prediction in those with medium risk. This is a 10-year retrospective cohort study of 905 subjects in a primary care clinic setting. Baseline CV risk profile and serum creatinine in 1998 were captured from patient records. The Framingham general cardiovascular disease risk score (FRS) for each patient was computed. All cardiovascular disease (CVD) events from 1998-2007 were captured. Overall, patients with CKD had a higher FRS risk score (25.9% vs 20%, p = 0.001) and more CVD events (22.3% vs 11.9%, p = 0.002) over a 10-year period compared to patients without CKD. In patients with medium CV risk, there was no significant difference in the FRS score among those with and without CKD (14.4% vs 14.6%, p = 0.84). However, in this same medium risk group, patients with CKD had more CV events compared to those without CKD (26.7% vs 6.6%, p = 0.005). This is in contrast to patients in the low- and high-risk groups, where there was no difference in CVD events whether these patients had or did not have CKD. There were more CV events in the Framingham medium risk group when they also had CKD compared with those in the same risk group without CKD. Hence factoring in CKD for those with medium risk helps to further stratify and identify those who are actually at greater risk and for whom treatment is more likely to be indicated.

  12. Use of Chronic Kidney Disease to Enhance Prediction of Cardiovascular Risk in Those at Medium Risk

    PubMed Central

    Chia, Yook Chin; Lim, Hooi Min; Ching, Siew Mooi

    2015-01-01

    Based on global cardiovascular (CV) risk assessment, for example using the Framingham risk score, it is recommended that those with high risk should be treated and those with low risk should not be treated. The recommendation for those of medium risk is less clear and uncertain. We aimed to determine whether factoring in chronic kidney disease (CKD) will improve CV risk prediction in those with medium risk. This is a 10-year retrospective cohort study of 905 subjects in a primary care clinic setting. Baseline CV risk profile and serum creatinine in 1998 were captured from patient records. The Framingham general cardiovascular disease risk score (FRS) for each patient was computed. All cardiovascular disease (CVD) events from 1998–2007 were captured. Overall, patients with CKD had a higher FRS risk score (25.9% vs 20%, p = 0.001) and more CVD events (22.3% vs 11.9%, p = 0.002) over a 10-year period compared to patients without CKD. In patients with medium CV risk, there was no significant difference in the FRS score among those with and without CKD (14.4% vs 14.6%, p = 0.84). However, in this same medium risk group, patients with CKD had more CV events compared to those without CKD (26.7% vs 6.6%, p = 0.005). This is in contrast to patients in the low- and high-risk groups, where there was no difference in CVD events whether these patients had or did not have CKD. There were more CV events in the Framingham medium risk group when they also had CKD compared with those in the same risk group without CKD. Hence factoring in CKD for those with medium risk helps to further stratify and identify those who are actually at greater risk and for whom treatment is more likely to be indicated. PMID:26496190
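
    The medium-risk contrast reported above can be restated as a crude event-rate ratio, using only the published percentages (a simplification for illustration; the paper's own analysis is not reproduced here):

    ```python
    def rate_ratio(rate_exposed: float, rate_unexposed: float) -> float:
        """Ratio of event proportions between two groups."""
        return rate_exposed / rate_unexposed

    # Medium-risk stratum: 26.7% of CKD patients vs 6.6% of non-CKD patients
    # experienced CV events over 10 years.
    rr = rate_ratio(0.267, 0.066)
    assert 4.0 < rr < 4.1   # roughly a four-fold higher event rate with CKD
    ```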

  13. The Next Generation of Personal Computers.

    ERIC Educational Resources Information Center

    Crecine, John P.

    1986-01-01

    Discusses factors converging to create high-capacity, low-cost nature of next generation of microcomputers: a coherent vision of what graphics workstation and future computing environment should be like; hardware developments leading to greater storage capacity at lower costs; and development of software and expertise to exploit computing power…

  14. A trunk ranging system based on binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Zhao, Xixuan; Kan, Jiangming

    2017-07-01

    Trunk ranging is an essential function for autonomous forestry robots. Traditional trunk ranging systems based on personal computers are not convenient in practical applications. This paper examines the implementation of a trunk ranging system based on binocular stereo vision theory on TI's DaVinci DM37x platform. The system is smaller and more reliable than one implemented on a personal computer. It calculates three-dimensional information from the images acquired by the binocular cameras, producing the targeting and ranging results. The experimental results show that the measurement error is small and that the system design is feasible for autonomous forestry robots.
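    The core of binocular ranging is triangulating depth from disparity in a rectified stereo pair, Z = f·B/d. A minimal sketch; the focal length, baseline, and disparity values below are illustrative numbers, not parameters of the DM37x system described in the abstract:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal pixel offset of the same point between images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline, 28 px disparity.
print(depth_from_disparity(700, 0.12, 28))  # 3.0 metres to the trunk
```

    Note that depth falls as disparity grows, so ranging accuracy degrades quickly for distant trunks, where disparities shrink toward the sub-pixel range.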

  15. Using Computer Vision Techniques to Locate Objects in an Image

    DTIC Science & Technology

    1988-09-01

    Sujata Kakarla, J. Wakeley, A. S. Maida. Using Computer Vision Techniques to Locate Objects in an Image. Technical report, The Pennsylvania State University, Applied Research Laboratory, P.O. Box 30, State College, PA 16804.

  16. The use of computer vision in an intelligent environment to support aging-in-place, safety, and independence in the home.

    PubMed

    Mihailidis, Alex; Carmichael, Brent; Boger, Jennifer

    2004-09-01

    This paper discusses the use of computer vision in pervasive healthcare systems, specifically in the design of a sensing agent for an intelligent environment that assists older adults with dementia during an activity of daily living. An overview of the techniques applied in this particular example is provided, along with results from preliminary trials completed using the new sensing agent. A discussion of the results obtained to date is presented, including technical and social issues that remain for the advancement and acceptance of this type of technology within pervasive healthcare.

  17. Capsule endoscope localization based on computer vision technique.

    PubMed

    Liu, Li; Hu, Chao; Cai, Wentao; Meng, Max Q H

    2009-01-01

    To build a new type of wireless capsule endoscope for interactive gastrointestinal tract examination, a localization and orientation system is needed to track the 3D location and 3D orientation of the capsule's movement. The magnetic localization and orientation method provides only 5 DOF, missing the rotation angle about the capsule's main axis. In this paper, we present a complementary orientation approach for the capsule endoscope in which the 3D rotation is determined by applying computer vision techniques to the captured endoscopic images. The experimental results show that the complementary orientation method has good accuracy and high feasibility.

  18. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which is local frequency coding); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  19. Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings.

    PubMed

    Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild

    2013-08-01

    This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and area under the curve were estimated for the first and second recording, or a mean of both. FMs were classified based on the Prechtl approach of general movement assessment. CP status was reported at 2 years. Nine children developed CP, and all of their recordings showed absent FMs. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than using only one recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies.

  20. Machine vision methods for use in grain variety discrimination and quality analysis

    NASA Astrophysics Data System (ADS)

    Winter, Philip W.; Sokhansanj, Shahab; Wood, Hugh C.

    1996-12-01

    Decreasing cost of computer technology has made it feasible to incorporate machine vision technology into the agriculture industry. The biggest attraction to using a machine vision system is the computer's ability to be completely consistent and objective. One use is in the variety discrimination and quality inspection of grains. Algorithms have been developed using Fourier descriptors and neural networks for use in variety discrimination of barley seeds. RGB and morphology features have been used in the quality analysis of lentils, and probability distribution functions and L,a,b color values for borage dockage testing. These methods have been shown to be very accurate and have a high potential for agriculture. This paper presents the techniques used and results obtained from projects including: a lentil quality discriminator, a barley variety classifier, a borage dockage tester, a popcorn quality analyzer, and a pistachio nut grading system.

  1. FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    PubMed Central

    Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of real-world applications. Many of the best motion estimation algorithms include features found in mammalian visual systems, which demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069

  2. SailSpy: a vision system for yacht sail shape measurement

    NASA Astrophysics Data System (ADS)

    Olsson, Olof J.; Power, P. Wayne; Bowman, Chris C.; Palmer, G. Terry; Clist, Roger S.

    1992-11-01

    SailSpy is a real-time vision system which we have developed for automatically measuring sail shapes and masthead rotation on racing yachts. Versions have been used by the New Zealand team in two America's Cup challenges in 1988 and 1992. SailSpy uses four miniature video cameras mounted at the top of the mast to provide views of the headsail and mainsail on either tack. The cameras are connected to the SailSpy computer below deck using lightweight cables mounted inside the mast. Images received from the cameras are automatically analyzed by the SailSpy computer, and sail shape and mast rotation parameters are calculated. The sail shape parameters are calculated by recognizing sail markers (ellipses) that have been attached to the sails, and the mast rotation parameters by recognizing deck markers painted on the deck. This paper describes the SailSpy system and some of the vision algorithms used.

  3. Applications of wavelets in interferometry and artificial vision

    NASA Astrophysics Data System (ADS)

    Escalona Z., Rafael A.

    2001-08-01

    In this paper we present a different perspective on phase measurements in interferometry, image processing and intelligent vision using the wavelet transform. In standard and white-light interferometry, the phase function is retrieved using phase-shifting, Fourier-transform, cosine-inversion and other known algorithms. The novel technique presented here is faster, more robust and shows excellent accuracy in phase determination. In our second application, fringes are no longer generated by light interaction but result from observing adapted strip patterns printed directly on the target of interest. The moving target is observed by a conventional vision system, and the usual phase computation algorithms are adapted to wavelet-based image processing in order to sense target position and displacement with high accuracy. In general, we have determined that the wavelet transform offers robustness, relatively fast computation and very high accuracy in phase computations.

  4. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    PubMed

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of real-world applications. Many of the best motion estimation algorithms include features found in mammalian visual systems, which demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.

  5. Note on the coefficient of variations of neuronal spike trains.

    PubMed

    Lengler, Johannes; Steger, Angelika

    2017-08-01

    It is known that many neurons in the brain show spike trains with a coefficient of variation (CV) of the interspike times of approximately 1, thus resembling the properties of Poisson spike trains. Computational studies have been able to reproduce this phenomenon. However, the underlying models were too complex to be examined analytically. In this paper, we offer a simple model that shows the same effect but is accessible to an analytic treatment. The model is a random walk model with a reflecting barrier; we give explicit formulas for the CV in the regime of excess inhibition. We also analyze the effect of probabilistic synapses in our model and show that it resembles previous findings that were obtained by simulation.
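    The Poisson property the abstract refers to can be checked numerically: a Poisson spike train has exponentially distributed interspike intervals, whose coefficient of variation is exactly 1. A minimal sketch (the firing rate is an arbitrary illustrative value; this is not the paper's random walk model):

```python
import random
import statistics

def interspike_cv(isis):
    """Coefficient of variation of interspike intervals: std / mean."""
    return statistics.pstdev(isis) / statistics.fmean(isis)

# Simulate a Poisson spike train by drawing exponential interspike intervals.
random.seed(0)
rate_hz = 10.0  # illustrative firing rate
isis = [random.expovariate(rate_hz) for _ in range(100_000)]
print(round(interspike_cv(isis), 2))  # close to 1.0, as for a Poisson process
```

    A regular (clock-like) train would give a CV near 0, and burstier-than-Poisson firing a CV above 1, which is why CV ≈ 1 is taken as the Poisson signature.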

  6. a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    NASA Astrophysics Data System (ADS)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

    For island coastline segmentation, a fast segmentation algorithm for the C-V (Chan-Vese) model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in the extracted coastline are solved through small-scale shrinkage, low-pass filtering and area sorting of regions; 2) the initial values of the SDF (signed distance function) and the level set are given by Otsu segmentation, exploiting the difference in SAR reflection between land and sea, so that they lie close to the coastline; 3) the computational complexity of the transition between scales is reduced through SDF and level set inheritance. Experimental results show that the method accelerates the formation of the initial level set and shortens coastline extraction time, while removing non-coastline bodies and improving the identification precision of the main coastline, automating the process of coastline segmentation.
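    The abstract's initialisation step relies on Otsu segmentation. A minimal Otsu threshold over an 8-bit histogram can be sketched as follows; this is plain Python on synthetic values, not the paper's SAR pipeline:

```python
def otsu_threshold(pixels):
    """Return the 8-bit threshold t maximising between-class variance (Otsu).
    Pixels <= t form the background class, pixels > t the foreground class."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]          # background weight after adding bin t
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # foreground weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark "sea" pixels near 40, bright "land" pixels near 200.
pixels = [40] * 500 + [45] * 500 + [200] * 400 + [210] * 400
t = otsu_threshold(pixels)
print(45 <= t < 200)  # the threshold falls between the two modes
```

    The resulting binary land/sea mask is what a level-set method would then refine into a sub-pixel coastline.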

  7. Keeping Student Performance Central: The New York Assessment Collection. Studies on Exhibitions.

    ERIC Educational Resources Information Center

    Allen, David; McDonald, Joseph

    This report describes a computer tool used by the state of New York to assess student performance in elementary and secondary grades. Based on the premise that every assessment is a system of interacting elements, the tool examines students on six dimensions: vision, prompt, coaching context, performance, standards, and reflection. Vision, which…

  8. Future Automated Rough Mills Hinge on Vision Systems

    Treesearch

    Philip A. Araman

    1996-01-01

    The backbone behind major changes to present and future rough mills in dimension, furniture, cabinet or millwork facilities will be computer vision systems. Because of the wide variety of products and the quality of parts produced, the scanning systems and rough mills will vary greatly. The scanners will vary in type. For many complicated applications, multiple scanner...

  9. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Treesearch

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  10. A Computer-Based System Integrating Instruction and Information Retrieval: A Description of Some Methodological Considerations.

    ERIC Educational Resources Information Center

    Selig, Judith A.; And Others

    This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December, 1966 to August, 1967, describes the methodology used to load a large body of information--a programed text on basic opthalmology--onto a computer for subsequent information retrieval and computer-assisted…

  11. NASA/ESA CV-990 Spacelab simulation. Appendixes: C, data-handling: Planning and implementation; D, communications; E, mission documentation

    NASA Technical Reports Server (NTRS)

    Reller, J. O., Jr.

    1976-01-01

    Data handling, communications, and documentation aspects of the ASSESS mission are described. Most experiments provided their own data handling equipment, although some used the airborne computer for backup, and one experiment required real-time computations. Communications facilities were set up to simulate those to be provided between Spacelab and the ground, including a downlink TV system. Mission documentation was kept to a minimum and proved sufficient. Examples are given of the basic documents of the mission.

  12. DoD Electronic Data Interchange (EDI) Convention: ASC X12 Transaction Set 856 Ship Notice/Manifest (Version 003030)

    DTIC Science & Technology

    1993-01-01

    From the Department of Defense Executive Agent for Electronic Commerce/Electronic Data Interchange. Keywords: Convention; Electronic Commerce; ANSI X12; electronic standards; electronic business standards; computer-to-computer exchange of data. Covers Ship Notice/Manifest information from the invoicing party to DFAS using ASC X12 856.

  13. Computers for the Disabled.

    ERIC Educational Resources Information Center

    Lazzaro, Joseph J.

    1993-01-01

    Describes adaptive technology for personal computers that accommodate disabled users and may require special equipment including hardware, memory, expansion slots, and ports. Highlights include vision aids, including speech synthesizers, magnification, braille, and optical character recognition (OCR); hearing adaptations; motor-impaired…

  14. Computational Analysis of Behavior.

    PubMed

    Egnor, S E Roian; Branson, Kristin

    2016-07-08

    In this review, we discuss the emerging field of computational behavioral analysis-the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important to both obtaining biologically relevant behavioral data and enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.

  15. Autonomic Computing: Panacea or Poppycock?

    NASA Technical Reports Server (NTRS)

    Sterritt, Roy; Hinchey, Mike

    2005-01-01

    Autonomic Computing arose out of a need for a means to cope with the rapidly growing complexity of integrating, managing, and operating computer-based systems, as well as a need to reduce the total cost of ownership of today's systems. Autonomic Computing (AC) as a discipline was proposed by IBM in 2001, with the vision to develop self-managing systems. As the name implies, the influence for the new paradigm is the human body's autonomic system, which regulates vital bodily functions such as the control of heart rate, the body's temperature and blood flow, all without conscious effort. The vision is to create selfware through self-* properties. The initial set of properties, in terms of objectives, were self-configuring, self-healing, self-optimizing and self-protecting, along with attributes of self-awareness, self-monitoring and self-adjusting. This self-* list has grown: self-anticipating, self-critical, self-defining, self-destructing, self-diagnosis, self-governing, self-organized, self-reflecting, and self-simulation, for instance.

  16. Object and Facial Recognition in Augmented and Virtual Reality: Investigation into Software, Hardware and Potential Uses

    NASA Technical Reports Server (NTRS)

    Schulte, Erin

    2017-01-01

    As augmented and virtual reality grows in popularity, and more researchers focus on its development, other fields of technology have grown in the hopes of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to make an intuitive, hands-free human-computer interaction (HCI) utilizing AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is utilized in devices such as the Microsoft Kinect, webcams and other similar hardware has shown potential in assisting with the development of a HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in the fields of medicine, security, industrial development and other similar areas.

  17. MER-DIMES : a planetary landing application of computer vision

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew; Matthies, Larry

    2005-01-01

    During the Mars Exploration Rovers (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combines measurements from a descent camera, a radar altimeter and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm uses altitude and attitude measurements to rectify the image data to a level ground plane. Feature selection and tracking are employed in the rectified data to compute the horizontal motion between images. Differences of motion estimates are then compared to inertial measurements to verify correct feature tracking. DIMES combines sensor data from multiple sources in a novel way to create a low-cost, robust and computationally efficient velocity estimation solution, and DIMES is the first use of computer vision to control a spacecraft during planetary landing. In this paper, the detailed implementation of the DIMES algorithm and the results from the two landings on Mars are presented.

  18. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  19. The effect of light touch on balance control during overground walking in healthy young adults.

    PubMed

    Oates, A R; Unger, J; Arnold, C M; Fung, J; Lanovaz, J L

    2017-12-01

    Balance control is essential for safe walking. Adding haptic input through light touch may improve walking balance; however, evidence is limited. This research investigated the effect of added haptic input through light touch in healthy young adults during challenging walking conditions. Sixteen individuals walked normally, in tandem, and on a compliant, low-lying balance beam, with and without light touch on a railing. Three-dimensional kinematic data were captured to compute stride velocity (m/s), relative time spent in double support (%DS), the medial-lateral margin of stability (MOS-ML) and its coefficient of variation (MOS-ML CV), as well as a symmetry index (SI) for the MOS-ML. Muscle activity was evaluated by integrating electromyography signals from the soleus, tibialis anterior, and gluteus medius muscles bilaterally. Adding haptic input decreased stride velocity, increased %DS, had no effect on MOS-ML magnitude, decreased the MOS-ML CV, had no effect on the SI, and increased activity of most muscles examined during normal walking. During tandem walking, stride velocity and the MOS-ML CV decreased, while %DS, MOS-ML magnitude, SI, and muscle activity did not change with light touch. When walking on the low-lying, compliant balance beam, light touch had no effect on walking velocity, MOS-ML magnitude, or muscle activity; however, %DS increased and the MOS-ML CV and SI decreased when lightly touching a railing. The decrease in the MOS-ML CV with light touch across all walking conditions suggests that adding haptic input through light touch on a railing may improve balance control during walking through reduced variability.

  20. Assessing human variability in kinetics for exposures to multiple environmental chemicals: a physiologically based pharmacokinetic modeling case study with dichloromethane, benzene, toluene, ethylbenzene, and m-xylene.

    PubMed

    Valcke, Mathieu; Haddad, Sami

    2015-01-01

    The objective of this study was to compare the magnitude of interindividual variability in internal dose for inhalation exposure to single versus multiple chemicals. Physiologically based pharmacokinetic models for adults (AD), neonates (NEO), toddlers (TODD), and pregnant women (PW) were used to simulate inhalation exposure to "low" (RfC-like) or "high" (AEGL-like) air concentrations of benzene (Bz) or dichloromethane (DCM), along with various levels of toluene alone or toluene with ethylbenzene and xylene. Monte Carlo simulations were performed and distributions of relevant internal dose metrics of either Bz or DCM were computed. Area under the blood concentration of parent compound versus time curve (AUC)-based variability in AD, TODD, and PW rose for Bz when concomitant "low" exposure to mixtures of increasing complexities occurred (coefficient of variation (CV) = 16-24%, vs 12-15% for Bz alone), but remained unchanged considering DCM. Conversely, AUC-based CV in NEO fell (15 to 5% for Bz; 12 to 6% for DCM). Comparable trends were observed considering production of metabolites (AMET), except for NEO's CYP2E1-mediated metabolites of Bz, where an increased CV was observed (20 to 71%). For "high" exposure scenarios, Cmax-based variability of Bz and DCM remained unchanged in AD and PW, but decreased in NEO (CV = 11-16% to 2-6%) and TODD (CV = 12-13% to 7-9%). Conversely, AMET-based variability for both substrates rose in every subpopulation. This study analyzed for the first time the impact of multiple exposures on interindividual variability in toxicokinetics. Evidence indicates that this impact depends upon chemical concentrations and biochemical properties, as well as the subpopulation and internal dose metrics considered.

  1. Textural, nutritional and functional attributes in tomato genotypes for breeding better quality varieties.

    PubMed

    Saha, Supradip; Hedau, Nirmal K; Mahajan, Vinay; Singh, Gyanendra; Gupta, Hari S; Gahalain, Anita

    2010-01-30

    Screening of natural biodiversity for better quality attributes is of prime importance for quality breeding programmes. A set of 53 tomato genotypes was measured for their textural [skin firmness, pericarp thickness, total soluble solids (TSS)], nutritional [phosphorus (P), potassium (K), iron (Fe), zinc (Zn), copper (Cu), manganese (Mn) and titrable acidity (TA)] and functional (beta-carotene, lycopene and ascorbic acid) quality attributes. Three sets of data (textural, nutritional and functional attributes) were obtained and analysed for their mutual relationships. Wide variations were observed in most of the measurements, e.g. skin firmness (coefficient of variability (CV) 269-612 g), pericarp thickness (CV 1.4-4.9 mm), potassium (CV 229-371 mg 100 g(-1)), iron (CV 611-1772 mg 100 g(-1)), ascorbic acid (CV 12-86 mg 100 g(-1)), suggesting that there are considerable levels of genetic diversity. Significant correlations (P < 0.05, 0.01) were also detected among different attributes of tomato genotypes, such as phosphorus and zinc with a correlation coefficient of 0.74, ascorbic acid and copper of 0.57, pericarp thickness and lycopene of -0.52. However, there were no correlations between textural and nutritional attributes. Five factors were computed by principal component analysis that explained 66% of the variation in the attributes, among which all micronutrients other than iron, TSS, firmness and beta-carotene were most important. Functional attributes except beta-carotene played a less important role in explaining total variation. This knowledge could aid in the efficient conservation of important parts of the agricultural biodiversity of India. These results are also potentially useful for tomato breeders working on the development of new varieties. (c) 2009 Society of Chemical Industry.
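    The "variance explained by the first k factors" figure quoted above comes from a principal component analysis. The mechanics can be sketched with NumPy; the data below are random stand-ins for the study's 53-genotype-by-13-attribute table, so only the computation, not the 66% result, is reproduced:

```python
import numpy as np

def explained_variance_ratio(X):
    """Fraction of total variance captured by each principal component,
    from the eigenvalues of the covariance matrix of column-centred data."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # eigvalsh is ascending; reverse
    return eigvals / eigvals.sum()

# Synthetic stand-in for a 53 x 13 genotype-by-attribute matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(53, 13))
ratios = explained_variance_ratio(X)
print(round(float(ratios[:5].sum()), 2))  # share of variance in 5 factors
```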

  2. The influence of apical and basal defoliation on the canopy structure and biochemical composition of Vitis vinifera cv. Shiraz grapes and wine

    NASA Astrophysics Data System (ADS)

    Zhang, Pangzhen; Wu, Xiwen; Needs, Sonja; Liu, Di; Fuentes, Sigfredo; Howell, Kate

    2017-07-01

    Defoliation is a commonly used viticultural technique to balance the ratio between grapevine vegetation and fruit. Defoliation is conducted around the fruit zone to reduce the leaf photosynthetic area and to increase sunlight exposure of grape bunches. Apical leaf removal is not commonly practiced, and therefore its influence on canopy structure and resultant wine aroma is not well studied. This study quantified the influences of apical and basal defoliation on canopy structure parameters using canopy cover photography and computer vision algorithms. The influence of canopy structure changes on the chemical composition of grapes and wines was investigated over two vintages (2010-11 and 2015-16) in the Yarra Valley, Australia. The Shiraz grapevines were subjected to five different treatments: no leaf removal (Ctrl); basal (TB) and apical (TD) leaf removal at veraison and intermediate ripeness, respectively. Basal leaf removal significantly reduced the leaf area index and foliage cover and increased canopy porosity, while apical leaf removal had limited influence on canopy parameters. However, the latter tended to result in a lower alcohol level in the finished wine. Statistically significant increases in pH and decreases in TA were observed in shaded grapes, while no significant changes in the color profile and volatile compounds of the resultant wine were found. These results suggest that apical leaf removal is an effective method to reduce wine alcohol concentration with minimal influence on wine composition.

  3. Gradient boosting machine for modeling the energy consumption of commercial buildings

    DOE PAGES

    Touzani, Samir; Granderson, Jessica; Fernandes, Samuel

    2017-11-26

Accurate savings estimations are important to promote energy efficiency projects and demonstrate their cost-effectiveness. The increasing presence of advanced metering infrastructure (AMI) in commercial buildings has resulted in a rising availability of high frequency interval data. These data can be used for a variety of energy efficiency applications such as demand response, fault detection and diagnosis, and heating, ventilation, and air conditioning (HVAC) optimization. This large amount of data has also opened the door to the use of advanced statistical learning models, which hold promise for providing accurate building baseline energy consumption predictions, and thus accurate savings estimations. The gradient boosting machine is a powerful machine learning algorithm that is gaining considerable traction in a wide range of data-driven applications, such as ecology, computer vision, and biology. In the present work an energy consumption baseline modeling method based on a gradient boosting machine was proposed. To assess the performance of this method, a recently published testing procedure was used on a large dataset of 410 commercial buildings. The model training periods were varied and several prediction accuracy metrics were used to evaluate the model's performance. The results show that using the gradient boosting machine model improved the R-squared prediction accuracy and the CV(RMSE) in more than 80 percent of the cases, when compared to an industry best practice model that is based on piecewise linear regression, and to a random forest algorithm.
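The two accuracy metrics cited, R-squared and CV(RMSE) (the RMSE normalized by the mean of the measured data, as in ASHRAE Guideline 14-style baseline evaluation), are simple to compute. A sketch with toy data (the series values are illustrative):

```python
import numpy as np

def cv_rmse(y_true, y_pred):
    """Coefficient of variation of the RMSE: RMSE divided by the
    mean of the measured consumption."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.mean(y_true)

def r_squared(y_true, y_pred):
    """Fraction of variance in y_true explained by y_pred."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy hourly consumption series (kWh) and a baseline model's prediction.
y = np.array([10.0, 12.0, 11.0, 13.0, 12.5])
yhat = np.array([10.5, 11.5, 11.0, 12.5, 13.0])
print(cv_rmse(y, yhat), r_squared(y, yhat))
```

A baseline model "improves CV(RMSE)" when this ratio is smaller on the held-out prediction period.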

  5. Kinect Fusion improvement using depth camera calibration

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

3D scene modelling, gesture recognition and motion tracking are fields in rapid and continuous development, driven by growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, Microsoft created the Kinect device. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner, producing meshed polygonal models of a static scene simply by moving the Kinect around it. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the interior and exterior orientation parameters of the IR camera, the Fusion libraries are corrected and new reconstruction software is created to produce more accurate models.
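A depth-correction step of the kind described can be sketched as fitting a low-order polynomial that maps raw sensor depth readings to reference distances measured against a target of known geometry. The calibration pairs below are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical calibration pairs: reference distances (mm) vs. raw
# depth readings (mm); the raw error grows with distance, as is
# typical for structured-light depth sensors.
true_d = np.array([800., 1200., 1600., 2000., 2400., 2800.])
raw_d  = np.array([812., 1221., 1637., 2055., 2481., 2912.])

# Model the systematic error with a quadratic in the raw reading,
# then apply it as a correction to new measurements.
coeffs = np.polyfit(raw_d, true_d, deg=2)
correct = np.poly1d(coeffs)

print(correct(1637.0))   # corrected depth for a raw 1637 mm reading (near 1600)
```

The actual paper calibrates per-pixel and also corrects the IR camera's interior/exterior orientation; this only illustrates the distance-dependent bias removal.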

  6. TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, H; Jenkins, C; Yu, S

Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system and contribute computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent phosphor coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation and alignment, reduce noise, and enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the data collected and the criteria from TG-142, a decision is made on whether or not the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to 0.1 mm/1 pixel precision. The program to correct for image alignment and determination of leaf positions requires a runtime of 21-25 seconds for a single picket, and 44-46 seconds for a group of three pickets on a standard workstation CPU (2.2 GHz Intel Core i7). Conclusion: MLC leaf edges were successfully found using techniques in computer vision. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.
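The "logistic fitting" step for locating a leaf edge can be sketched as fitting a sigmoid step to a 1-D intensity profile taken across the edge and reading off its centre to sub-pixel precision. This sketch uses a brute-force grid search over the edge centre (the paper's exact fitting procedure and parameters are not given in the abstract, so the slope and grid step here are assumptions):

```python
import numpy as np

def logistic(x, x0, k, lo, hi):
    """Sigmoid step from intensity lo to hi centred at x0 with slope k."""
    return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

def fit_edge_position(profile, k=2.0):
    """Locate an edge to sub-pixel precision by fitting a logistic step
    to a 1-D intensity profile: sweep candidate centres on a 0.01-pixel
    grid and keep the one with minimum squared error."""
    x = np.arange(profile.size, dtype=float)
    lo, hi = profile.min(), profile.max()
    centres = np.arange(0.0, profile.size - 1, 0.01)
    sse = [np.sum((logistic(x, c, k, lo, hi) - profile) ** 2) for c in centres]
    return centres[int(np.argmin(sse))]

# Synthetic profile across an MLC leaf edge at pixel 12.3.
x = np.arange(25, dtype=float)
profile = logistic(x, 12.3, 2.0, 10.0, 200.0)
print(fit_edge_position(profile))   # close to 12.3
```

A production routine would use a continuous optimizer rather than a grid, but the recovered sub-pixel centre is the same quantity the QA test compares against TG-142 tolerances.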

  7. Practicality of quantum information processing

    NASA Astrophysics Data System (ADS)

    Lau, Hoi-Kwan

Quantum Information Processing (QIP) is expected to bring revolutionary enhancement to various technological areas. However, today's QIP applications are far from being practical. The problem involves both hardware issues, i.e., quantum devices are imperfect, and software issues, i.e., the functionality of some QIP applications is not fully understood. Aiming to improve the practicality of QIP, in my PhD research I have studied various topics in quantum cryptography and ion trap quantum computation. In quantum cryptography, I first studied the security of position-based quantum cryptography (PBQC). I discovered a wrong assumption in the previous literature that the cheaters are not allowed to share entangled resources. I proposed entanglement attacks that could cheat all known PBQC protocols. I also studied the practicality of continuous-variable (CV) quantum secret sharing (QSS). While the security of CV QSS was considered in the literature only in the limit of infinite squeezing, I found that finitely squeezed CV resources could also provide a finite secret sharing rate. Our work relaxes the stringent resource requirements of implementing QSS. In ion trap quantum computation, I studied the phase error of quantum information induced by the dc Stark effect during ion transportation. I found an optimized ion trajectory for which the phase error is minimal. I also defined a threshold speed, above which ion transportation would induce significant error. In addition, I proposed a new application of ion trap systems as universal bosonic simulators (UBS). I introduced two architectures, and discussed their respective strengths and weaknesses. I illustrated the implementations of bosonic state initialization, transformation, and measurement by applying radiation fields or by varying the trap potential. Compared with optical experiments, the ion trap UBS offers higher state-initialization efficiency and higher measurement accuracy.
Finally, I proposed a new method to re-cool ion qubits during quantum computation. The idea is to transfer the motional excitation of a qubit to another ion that is prepared in the motional ground state. I showed that my method could be ten times faster than current laser cooling techniques, and thus could improve the speed of ion trap quantum computation.

  8. Fishery stock assessment of Kiddi shrimp ( Parapenaeopsis stylifera) in the Northern Arabian Sea Coast of Pakistan by using surplus production models

    NASA Astrophysics Data System (ADS)

    Mohsin, Muhammad; Mu, Yongtong; Memon, Aamir Mahmood; Kalhoro, Muhammad Talib; Shah, Syed Baber Hussain

    2017-07-01

Pakistani marine waters are under an open-access regime. Due to poor management and policy implications, unregulated fishing continues, which may result in ecological as well as economic losses. Thus, it is of utmost importance to estimate fishery resources before harvesting. In this study, catch and effort data (1996-2009) of the Kiddi shrimp Parapenaeopsis stylifera fishery from Pakistani marine waters were analyzed using specialized fishery software in order to determine the stock status of this commercially important shrimp. Maximum, minimum and average capture production of P. stylifera was observed as 15 912 metric tons (mt) (1997), 9 438 mt (2009) and 11 667 mt/a. Two stock assessment tools, viz. CEDA (catch and effort data analysis) and ASPIC (a stock production model incorporating covariates), were used to compute the MSY (maximum sustainable yield) of this organism. In CEDA, three surplus production models (Fox, Schaefer and Pella-Tomlinson) along with three error assumptions (log, log-normal and gamma) were used. For an initial proportion (IP) of 0.8, the Fox model computed MSY as 6 858 mt (CV=0.204, R²=0.709) and 7 384 mt (CV=0.149, R²=0.72) for the log and log-normal error assumptions respectively. Here, the gamma error assumption produced minimization failure. Estimated MSY using the Schaefer and Pella-Tomlinson models remained the same for the log, log-normal and gamma error assumptions, i.e. 7 083 mt, 8 209 mt and 7 242 mt correspondingly. The Schaefer results showed the highest goodness-of-fit R² (0.712) values. ASPIC computed the MSY, CV, R², FMSY and BMSY parameters for the Fox model as 7 219 mt, 0.142, 0.872, 0.111 and 65 280, while for the Logistic model the computed values were 7 720 mt, 0.148, 0.868, 0.107 and 72 110 correspondingly. The results show that P. stylifera has been overexploited. Immediate steps are needed to conserve this fishery resource for the future, and research on other species of commercial importance is urgently needed.
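The simplest of the surplus production fits named above is the equilibrium Schaefer model, under which catch-per-unit-effort declines linearly with effort, U = a + bE (b < 0), giving MSY = -a²/(4b) at effort E_MSY = -a/(2b). A sketch with a hypothetical catch/effort series (the paper's full 1996-2009 series is not reproduced in the abstract; CEDA and ASPIC also fit non-equilibrium dynamic versions):

```python
import numpy as np

# Hypothetical catch (mt) and effort series standing in for the data.
catch_ = np.array([15912., 14200., 13500., 12800., 12100., 11400., 10700., 9438.])
effort = np.array([ 9000., 10000., 11000., 12000., 13000., 14000., 15000., 16000.])

cpue = catch_ / effort              # U: catch per unit effort
b, a = np.polyfit(effort, cpue, 1)  # equilibrium Schaefer: U = a + b*E

# Yield Y(E) = a*E + b*E^2 is maximized at E_MSY = -a/(2b), Y = -a^2/(4b).
e_msy = -a / (2.0 * b)
msy = -a**2 / (4.0 * b)
print(round(msy), round(e_msy))
```

On real data the slope b being negative (CPUE falling with effort) is what makes the fitted MSY finite; a flat or rising CPUE series would invalidate the equilibrium assumption.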

  9. [Effect of electroacupuncture at Zhongwan(CV 12) on skin microcirculatory blood perfusion units along the conception vessel in yang-deficiency volunteers].

    PubMed

    Shen, Cimin; Xu, Jinsen; Zheng, Shuxia; Lin, Lijiao; Yang, Xiaomei; Liu, Chunlan

    2016-02-01

To observe the effect of electroacupuncture (EA) at Zhongwan (CV 12) on energy metabolism along the conception vessel (CV) in volunteers with yang-deficiency constitution, and to explore the relationship between electroacupuncture regulation and body constitution. Eighteen volunteers with mild constitution and 18 volunteers with yang-deficiency constitution were selected from 200 students of Fujian University of TCM by body constitution questionnaire. Skin microcirculatory blood perfusion units (MBPU) at Danzhong (CV 17), Xiawan (CV 10) and Qihai (CV 6) were measured by laser Doppler flowmetry in the normal condition and after EA stimulation at Zhongwan (CV 12) for 20 min. (1) Before treatment: (a) MBPU values at Danzhong (CV 17), Xiawan (CV 10) and Qihai (CV 6) in the yang-deficiency constitution group were lower than those in the mild constitution group, but without statistical significance (both P>0.05) except at Danzhong (CV 17) (P<0.01). (b) Among the three acupoints in the mild constitution group, the MBPU level of Danzhong (CV 17) was higher than that of Xiawan (CV 10) without statistical significance (P>0.05), and the MBPU values of Danzhong (CV 17) and Xiawan (CV 10) were higher than that of Qihai (CV 6) (both P<0.01). (c) Among the three acupoints in the yang-deficiency constitution group, the MBPU of Danzhong (CV 17) was lower than that of Xiawan (CV 10), but higher than that of Qihai (CV 6) (P<0.05, P<0.01). The MBPU of Xiawan (CV 10) was higher than that of Qihai (CV 6) as well (P<0.01). (2) MBPU values of Danzhong (CV 17), Xiawan (CV 10) and Qihai (CV 6) increased apparently compared with those before treatment after EA stimulation at Zhongwan (CV 12) for 20 min in the two groups (all P<0.01). (3) The rise rates of the MBPU level at Danzhong (CV 17) and Qihai (CV 6) in the yang-deficiency constitution group were higher than those in the mild constitution group after EA at Zhongwan (CV 12) for 20 min, without statistical significance (both P>0.05). 
Energy metabolism along the CV of volunteers with yang-deficiency constitution is decreased, especially at Danzhong (CV 17). EA can raise energy metabolism along the CV of volunteers with mild or yang-deficiency constitution by regulating MBPU along the meridian.

  10. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. The human brain appears to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows creating new intelligent computer vision systems for the robotics and defense industries.

  11. Computer vision syndrome prevalence, knowledge and associated factors among Saudi Arabia University Students: Is it a serious problem?

    PubMed

    Al Rashidi, Sultan H; Alhumaidan, H

    2017-01-01

Computers and other visual display devices are now an essential part of our daily life. With the increased use, a very large population globally is experiencing sundry ocular symptoms such as dry eyes, eye strain, irritation, and redness of the eyes, to name a few. Collectively, all such computer-related symptoms are usually referred to as computer vision syndrome (CVS). The current study aims to define the prevalence, knowledge in the community, pathophysiology, associated factors, and prevention of CVS. This is a cross-sectional study conducted in Qassim University College of Medicine during a period of 1 year from January 2015 to January 2016, using a questionnaire to collect relevant data including demographics and various variables to be studied. 634 students were inducted from a public sector university of Qassim, Saudi Arabia, regardless of their age and gender. The data were then statistically analyzed in SPSS version 22, and the descriptive data were expressed as percentages, mode, and median, using graphs where needed. A total of 634 students with a mean age of 21.40 (SD 1.997, range 18-25) were included as study subjects, with a male predominance (77.28%). Of the total subjects, the majority (459, 72%) presented with acute symptoms while the remaining had chronic problems. A clear-cut majority had carried the symptoms for <5 days or >1 month. The statistical analysis revealed serious symptoms in the majority of study subjects, especially those who are permanent users of a computer for long hours. Continuous use of computers for long hours is found to cause severe vision problems, especially in those who use computers and similar devices for a long duration.

  13. CPAP and hypertension in nonsleepy patients.

    PubMed

    Phillips, Barbara; Shafazand, Shirin

    2013-02-01

Is continuous positive airway pressure (CPAP) therapy better than no therapy in reducing the incidence of hypertension or cardiovascular (CV) events in a cohort of nonsleepy patients with obstructive sleep apnea (OSA)? Randomized, controlled trial; no placebo CPAP used. ClinicalTrials.gov Identifier: NCT00127348. Randomization was performed using a computer-generated list of random numbers in the coordinating center, and results were mailed to participating centers in numbered opaque envelopes. The primary outcome was evaluated by individuals not involved in the study who were blinded to patient allocation. Patients, investigators, and the statistician were not blinded. Follow-up: median 4 (interquartile range, 2.7-4.4) years. 14 academic medical centers in Spain. 725 adults (mean age 51.8 y, 14% women) who were diagnosed with OSA with an apnea-hypopnea index (AHI) ≥ 20 events per hour and Epworth sleepiness score (ESS) ≤ 10 were randomized. Subjects with previous CV events were excluded; however, patients with a history of hypertension were not excluded (50% of the sample were hypertensive at baseline). Patients were randomized to receive CPAP treatment or no active intervention. All participants received dietary counseling and advice about sleep hygiene. The primary outcome was the incidence of either systemic hypertension (among participants who were normotensive at baseline) or CV events (among all participants). The secondary outcome was the association between the incidence of hypertension or CV events (nonfatal myocardial infarction, nonfatal stroke, transient ischemic attack, hospitalization for unstable angina or arrhythmia, heart failure, and CV death) and the severity of OSA as assessed by the AHI and oxygen saturation. 
The sample size was calculated assuming that the incidence of hypertension or a new CV event in this population over a period of 3 years would be 10% annually; 345 patients per group were needed to detect a 60% reduction in the incidence of new hypertension or CV events (90% power, 2-sided α = 0.05, assuming 10% study dropout). Follow-up was 83% complete (only patients who received the allocated intervention were included in the analysis of the primary outcome). A total of 147 patients with new hypertension and 59 cardiovascular events were identified (Table: risk for incident hypertension or cardiovascular event). In the CPAP group, there were 68 patients with incident hypertension and 28 CV events. Of the 357 participants in the CPAP group, 127 (36%) used CPAP < 4 hours/night. In the control group, there were 79 patients with new hypertension and 31 CV events. There was no statistically significant difference between the groups in the primary outcome. In adults with moderate to severe OSA and no symptoms of daytime sleepiness, CPAP therapy did not reduce incident hypertension or CV events compared with no active therapy. Funding: Instituto de Salud Carlos III (PI 04/0165) (Fondo de Investigaciones Sanitarios, Ministerio de Sanidad y Consumo, Spain), Spanish Respiratory Society (SEPAR) (Barcelona), ResMed (Bella Vista, Australia), Air Products-Carburos Metalicos (Barcelona), Respironics (Murrysville, Pennsylvania), and Breas Medical (Madrid, Spain).
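A sample-size calculation of this kind typically follows the standard two-proportion normal-approximation formula. The sketch below is generic and uses illustrative event rates; the trial's published figure of 345 per group rests on its own event-rate and analysis assumptions, which the abstract does not fully specify, so this sketch is not expected to reproduce it.

```python
import math

def n_per_group(p1, p2, z_alpha=1.959964, z_power=1.281552, dropout=0.10):
    """Sample size per group for comparing two proportions (normal
    approximation), inflated for dropout. Default z-values correspond
    to two-sided alpha = 0.05 and 90% power."""
    p_bar = (p1 + p2) / 2.0
    num = (z_alpha * math.sqrt(2.0 * p_bar * (1.0 - p_bar))
           + z_power * math.sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2))) ** 2
    n = num / (p1 - p2) ** 2
    return math.ceil(n / (1.0 - dropout))

# Illustrative: a 30% 3-year event rate vs. a 60% relative reduction.
print(n_per_group(0.30, 0.12))
```

Larger treatment effects or higher baseline rates shrink the required group size, which is why the assumed event rate dominates such calculations.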

  14. Computing Visible-Surface Representations,

    DTIC Science & Technology

    1985-03-01

Terzopoulos, contract N00014-75-C-0643; Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Support for the laboratory's Artificial Intelligence research is provided in part by the Advanced Research Proj… [OCR-damaged cover-page text] …dynamically maintaining visible surface representations. Whether the intention is to model human vision or to design competent artificial vision systems…

  15. Bird Vision System

    NASA Technical Reports Server (NTRS)

    2008-01-01

The Bird Vision system is a multicamera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits it in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.

  16. The Role of Prototype Learning in Hierarchical Models of Vision

    ERIC Educational Resources Information Center

    Thomure, Michael David

    2014-01-01

    I conduct a study of learning in HMAX-like models, which are hierarchical models of visual processing in biological vision systems. Such models compute a new representation for an image based on the similarity of image sub-parts to a number of specific patterns, called prototypes. Despite being a central piece of the overall model, the issue of…

  17. New Visions of Reality: Multimedia and Education.

    ERIC Educational Resources Information Center

    Ambron, Sueann

    1986-01-01

    Multimedia is a powerful tool that will change both the way we look at knowledge and our vision of reality, as well as our educational system and the business world. Multimedia as used here refers to the innovation of mixing text, audio, and video through the use of a computer. Not only will there be new products emerging from multimedia uses, but…

  18. Synthetic Vision Displays for Planetary and Lunar Lander Vehicles

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Shelton, Kevin J.; Kramer, Lynda J.; Bailey, Randall E.; Norman, Robert M.

    2008-01-01

    Aviation research has demonstrated that Synthetic Vision (SV) technology can substantially enhance situation awareness, reduce pilot workload, improve aviation safety, and promote flight path control precision. SV and related flight deck technologies are currently being extended for application in planetary exploration vehicles. SV, in particular, holds significant potential for many planetary missions since the SV presentation provides a computer-generated view for the flight crew of the terrain and other significant environmental characteristics independent of the outside visibility conditions, window locations, or vehicle attributes. SV allows unconstrained control of the computer-generated scene lighting, terrain coloring, and virtual camera angles which may provide invaluable visual cues to pilots/astronauts, not available from other vision technologies. In addition, important vehicle state information may be conformally displayed on the view such as forward and down velocities, altitude, and fuel remaining to enhance trajectory control and vehicle system status. The paper accompanies a conference demonstration that introduced a prototype NASA Synthetic Vision system for lunar lander spacecraft. The paper will describe technical challenges and potential solutions to SV applications for the lunar landing mission, including the requirements for high-resolution lunar terrain maps, accurate positioning and orientation, and lunar cockpit display concepts to support projected mission challenges.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bing; Tian, Xuedong; Wang, Qian

Purpose: Accurate detection of pulmonary nodules remains a technical challenge in computer-aided diagnosis systems because some nodules may adhere to the blood vessels or the lung wall, which have low contrast compared to the surrounding tissues. In this paper, the analysis of typical shape features of candidate nodules based on a shape-constrained Chan–Vese (CV) model, combined with calculation of the number of blood branches adhered to nodule candidates, is proposed to reduce false positive (FP) nodules from candidate nodules. Methods: The proposed scheme consists of three major stages: (1) Segmentation of lung parenchyma from computed tomography images. (2) Extraction of candidate nodules. (3) Reduction of FP nodules. A gray level enhancement combined with a spherical shape enhancement filter is introduced to extract the candidate nodules and their sphere-like contour regions. FPs are removed by analysis of the typical shape features of nodule candidates based on the CV model using a spherical constraint and by investigating the number of blood branches adhered to the candidate nodules. The constrained shapes of the CV model are automatically obtained from the extracted candidate nodules. Results: The detection performance was evaluated on 127 nodules of 103 cases including three types of challenging nodules: juxta-pleural nodules, juxta-vascular nodules, and ground glass opacity nodules. The free-receiver operating characteristic (FROC) curve shows that the proposed method is able to detect 88% of all the nodules in the data set with 4 FPs per case. Conclusions: Evaluation shows that the authors' method is feasible and effective for detection of the three types of nodules in this study.
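The two-phase Chan–Vese idea underlying the model can be sketched by its data term alone: alternately estimate the mean intensities inside and outside the contour and reassign each pixel to the closer mean. This omits the curve-length regularization and the paper's spherical shape constraint, which are the actual contributions; the synthetic image is illustrative.

```python
import numpy as np

def chan_vese_two_phase(img, n_iter=50):
    """Minimal two-phase Chan-Vese-style segmentation (data term only,
    no length regularization or shape constraint): update the inside/
    outside means c1, c2 and reassign pixels to the closer mean."""
    mask = img > img.mean()                       # initial region estimate
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):        # converged
            break
        mask = new_mask
    return mask

# Synthetic bright "nodule" on a dark background.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
seg = chan_vese_two_phase(img)
print(seg.sum())   # 64 pixels: the 8x8 nodule
```

The full CV model adds a length penalty that smooths the contour, and the paper further restricts the evolving shape toward the sphere-like candidates extracted in stage (2).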

  20. Three-Dimensional Images For Robot Vision

    NASA Astrophysics Data System (ADS)

    McFarland, William D.

    1983-12-01

Robots are attracting increased attention in the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, the need for robot vision has become critical. The "blind" robot, while occupying an economical niche at present, is severely limited and job specific, being only one step up from numerically controlled machines. To successfully satisfy robot vision requirements, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with more emphasis on laser radar type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed, with some discussion of the major techniques in the literature for three-dimensional scene analysis.

Top