Imaging, object detection, and change detection with a polarized multistatic GPR array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, N. Reginald; Paglieroni, David W.
A polarized detection system performs imaging, object detection, and change detection factoring in the orientation of an object relative to the orientation of transceivers. The polarized detection system may operate in one of several modes of operation based on whether the imaging, object detection, or change detection is performed separately for each transceiver orientation. In combined change mode, the polarized detection system performs imaging, object detection, and change detection separately for each transceiver orientation, and then combines changes across polarizations. In combined object mode, the polarized detection system performs imaging and object detection separately for each transceiver orientation, and then combines objects across polarizations and performs change detection on the result. In combined image mode, the polarized detection system performs imaging separately for each transceiver orientation, and then combines images across polarizations and performs object detection followed by change detection on the result.
Parietal and frontal object areas underlie perception of object orientation in depth.
Niimi, Ryosuke; Saneyoshi, Ayako; Abe, Reiko; Kaminaga, Tatsuro; Yokosawa, Kazuhiko
2011-05-27
Recent studies have shown that the human parietal and frontal cortices are involved in object image perception. We hypothesized that the parietal/frontal object areas play a role in differentiating the orientations (i.e., views) of an object. By using functional magnetic resonance imaging, we compared brain activations while human observers differentiated between two object images in depth-orientation (orientation task) and activations while they differentiated the images in object identity (identity task). The left intraparietal area, right angular gyrus, and right inferior frontal areas were activated more for the orientation task than for the identity task. The occipitotemporal object areas, however, were activated equally for the two tasks. No region showed greater activation for the identity task. These results suggested that the parietal/frontal object areas encode view-dependent visual features and underlie object orientation perception. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Determining Object Orientation from a Single Image Using Multiple Information Sources.
1984-06-01
object surface. Location of the image ellipse is accomplished by exploiting knowledge about object boundaries and image intensity gradients. The orientation information from each of these three methods is combined using a "plausibility" function.
Contour-based object orientation estimation
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel
2016-04-01
Real-time object orientation estimation is a pressing problem in computer vision. In this paper we propose an approach to estimating the orientation of objects that lack axial symmetry. The proposed algorithm estimates the orientation of a specific known 3D object, so a 3D model is required for learning. The algorithm consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, we gather a set of training images by capturing the model from viewpoints evenly distributed on a sphere. The viewpoints are distributed by the geosphere principle, which minimizes the size of the training image set. The gathered training images are used to calculate descriptors, which are then used in the estimation stage. The estimation stage matches the descriptor of an observed image against the training image descriptors. Experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error below 6°) in all case studies. Real-time performance of the algorithm was also demonstrated.
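The viewpoint sampling and descriptor matching described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Fibonacci sphere is used here as a simple stand-in for geosphere-style even sampling, and nearest-neighbour matching in Euclidean descriptor space is an assumption.

```python
import numpy as np

def fibonacci_sphere(n):
    """Approximately even distribution of n viewpoints on the unit sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0))      # golden angle
    z = 1.0 - 2.0 * (i + 0.5) / n           # uniform spacing in z
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi * i), r * np.sin(phi * i), z], axis=1)

def nearest_view(query_desc, train_descs):
    """Index of the training descriptor closest to the observed-image descriptor."""
    d = np.linalg.norm(train_descs - query_desc, axis=1)
    return int(np.argmin(d))

views = fibonacci_sphere(256)               # camera positions for rendering the 3D model
```

Each training view would be rendered from one of these positions, its descriptor stored, and an observed image assigned the orientation of its nearest training descriptor.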
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery
2016-10-01
This paper describes the implementation of the orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects that lack axial symmetry is proposed. The algorithm estimates the orientation of a specific known 3D object based on its 3D model. It consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, we gather a set of training images by capturing the model from viewpoints evenly distributed on a sphere, with the viewpoints distributed by the geosphere principle. The gathered training images are used to calculate descriptors, which are then used in the estimation stage. The estimation stage matches the descriptor of an observed image against the training image descriptors. Experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies, and real-time performance in the FPGA-based vision system was demonstrated.
Orientation estimation of anatomical structures in medical images for object recognition
NASA Astrophysics Data System (ADS)
Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian
2011-03-01
Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimates of objects, rough information about "where" the objects are in the image, and distinguishes them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of the anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than the other Euclidean and non-Euclidean metrics.
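For illustration, the Log-Euclidean metric named above can be computed via an eigendecomposition-based matrix logarithm. This is a generic sketch assuming symmetric positive-definite inputs, not the authors' code:

```python
import numpy as np

def spd_log(M):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance: Frobenius norm of the difference of matrix logs."""
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")
```

For example, the distance between the 3x3 identity and twice the identity is log(2) * sqrt(3), since the matrix log of 2I is (log 2)I.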
Mirror-Image Confusions: Implications for Representation and Processing of Object Orientation
ERIC Educational Resources Information Center
Gregory, Emma; McCloskey, Michael
2010-01-01
Perceiving the orientation of objects is important for interacting with the world, yet little is known about the mental representation or processing of object orientation information. The tendency of humans and other species to confuse mirror images provides a potential clue. However, the appropriate characterization of this phenomenon is not…
Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data
NASA Astrophysics Data System (ADS)
Hung, C. H.; Chang, W. C.; Chen, L. C.
2016-06-01
With the popularity of geospatial applications, database updating is becoming important due to environmental changes over time. Imagery provides a lower-cost and efficient way to update the database. Three-dimensional objects can be measured by space intersection using conjugate image points and camera orientation parameters. However, precise orientation parameters of light amateur cameras are not always available because precision GPS and IMU units are costly and heavy. To automate data updating, correspondences between object vector data and the image may be built to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image feature lines, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part then utilized the line features in orientation modeling, performed by integrating line parametric equations into the collinearity condition equations. The experimental data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that 2.1 pixel accuracy may be reached, which is equivalent to 0.12 m in the object space.
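The back-projection step rests on the collinearity condition, which can be sketched as a pinhole projection. This is a hypothetical helper, assuming a rotation matrix R, camera position t, and focal length f in pixel units, with no lens distortion:

```python
import numpy as np

def back_project(points, R, t, f):
    """Project 3-D object points into image coordinates via the
    collinearity condition (pinhole model, no lens distortion)."""
    Xc = (R @ (points - t).T).T           # object frame -> camera frame
    return f * Xc[:, :2] / Xc[:, 2:3]     # perspective division
```

With the initial GPS/IMU orientation plugged in for R and t, building vector vertices projected this way can be compared against extracted image line features.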
Pinciroli, F; Combi, C; Pozzi, G
1995-02-01
The use of database techniques to store medical records has been going on for more than 40 years. Some aspects still remain unresolved, e.g., the management of textual data and image data within a single system. Object-orientation techniques applied to a database management system (DBMS) allow the definition of suitable data structures (e.g., to store digital images); some facilities allow the use of predefined structures when defining new ones. Currently available object-oriented DBMSs, however, still need improvements both in schema update and in query facilities. This paper describes a prototype of a medical record that includes some multimedia features, managing both textual and image data. The prototype described here considers data from the medical records of patients subjected to percutaneous transluminal coronary artery angioplasty. We developed it on a Sun workstation with the Unix operating system and ONTOS as an object-oriented DBMS.
Schmid, Anita M.; Victor, Jonathan D.
2014-01-01
When analyzing a visual image, the brain has to achieve several goals quickly. One crucial goal is to rapidly detect parts of the visual scene that might be behaviorally relevant, while another one is to segment the image into objects, to enable an internal representation of the world. Both of these processes can be driven by local variations in any of several image attributes such as luminance, color, and texture. Here, focusing on texture defined by local orientation, we propose that the two processes are mediated by separate mechanisms that function in parallel. More specifically, differences in orientation can cause an object to “pop out” and attract visual attention, if its orientation differs from that of the surrounding objects. Differences in orientation can also signal a boundary between objects and therefore provide useful information for image segmentation. We propose that contextual response modulations in primary visual cortex (V1) are responsible for orientation pop-out, while a different kind of receptive field nonlinearity in secondary visual cortex (V2) is responsible for orientation-based texture segmentation. We review a recent experiment that led us to put forward this hypothesis along with other research literature relevant to this notion. PMID:25064441
Embodied memory allows accurate and stable perception of hidden objects despite orientation change.
Pan, Jing Samantha; Bingham, Ned; Bingham, Geoffrey P
2017-07-01
Rotating a scene in a frontoparallel plane (rolling) yields a change in orientation of constituent images. When using only information provided by static images to perceive a scene after orientation change, identification performance typically decreases (Rock & Heimer, 1957). However, rolling generates optic flow information that relates the discrete, static images (before and after the change) and forms an embodied memory that aids recognition. The embodied memory hypothesis predicts that upon detecting a continuous spatial transformation of image structure, or in other words, seeing the continuous rolling process and the objects undergoing rolling, observers should accurately perceive objects during and after motion. Thus, in this case, orientation change should not affect performance. We tested this hypothesis in three experiments and found that (a) using combined optic flow and image structure, participants identified locations of previously perceived but currently occluded targets with great accuracy and stability (Experiment 1); (b) using combined optic flow and image structure information, participants identified hidden targets equally well with or without 30° orientation changes (Experiment 2); and (c) when the rolling was unseen, identification of hidden targets after orientation change became worse (Experiment 3). Furthermore, when rolling was unseen, although target identification was better when participants were told about the orientation change than when they were not told, performance was still worse than when there was no orientation change. Therefore, combined optic flow and image structure information, not mere knowledge about the rolling, enables accurate and stable perception despite orientation change. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Image BOSS: a biomedical object storage system
NASA Astrophysics Data System (ADS)
Stacy, Mahlon C.; Augustine, Kurt E.; Robb, Richard A.
1997-05-01
Researchers using biomedical images have data management needs which are oriented perpendicular to clinical PACS. The image BOSS system is designed to permit researchers to organize and select images based on research topic, image metadata, and a thumbnail of the image. Image information is captured from existing images in a Unix based filesystem, stored in an object oriented database, and presented to the user in a familiar laboratory notebook metaphor. In addition, the ImageBOSS is designed to provide an extensible infrastructure for future content-based queries directly on the images.
Rodríguez, Jaime; Martín, María T; Herráez, José; Arias, Pedro
2008-12-10
Photogrammetry is a science with many fields of application in civil engineering where image processing is used for different purposes. The simultaneous use of multiple images for the reconstruction of 3D scenes is common. However, the use of isolated images is becoming more and more frequent, for which it is necessary to calculate the orientation of the image with respect to the object space (exterior orientation); this is usually accomplished through three rotations (Euler angles) using known points in the object space. We describe the resolution of this problem by means of a single rotation through the vanishing line of the image space, completely external to the object, that is, without any contact with it. The results obtained appear to be optimal, and the procedure is simple and of great utility, since no points over the object are required, which is very useful in situations where access is difficult.
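The single-rotation idea can be illustrated by computing the rotation that aligns one unit direction with another, for instance aligning the image-plane normal with the direction implied by the vanishing line. This is a hedged sketch via the Rodrigues formula; the paper's actual construction from the vanishing line may differ:

```python
import numpy as np

def rotation_between(u, v):
    """Rotation matrix taking unit vector u onto unit vector v (Rodrigues formula)."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    k = np.cross(u, v)
    s, c = np.linalg.norm(k), float(np.dot(u, v))
    if s < 1e-12:
        if c > 0:
            return np.eye(3)                     # already aligned
        # 180° case: rotate about any axis orthogonal to u
        a = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        a = a - a.dot(u) * u
        a /= np.linalg.norm(a)
        return 2.0 * np.outer(a, a) - np.eye(3)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)
```

A single such rotation, derived from image-space quantities only, stands in for the usual three Euler-angle rotations that require known object points.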
An ECG storage and retrieval system embedded in client server HIS utilizing object-oriented DB.
Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S
1996-02-01
In the University of Tokyo Hospital, the improved client-server HIS has been applied to clinical practice, and physicians can order prescriptions, laboratory examinations, ECG examinations, radiographic examinations, etc. directly by themselves and read the results of these examinations, except medical signal waves, schemas and images, on UNIX workstations. Recently, we designed and developed an ECG storage and retrieval system embedded in the client-server HIS utilizing an object-oriented database, to take the first step in dealing with digitized signal, schema and image data and to show waves, graphics, and images directly to physicians through the client-server HIS. The system was developed based on object-oriented analysis and design, and implemented with an object-oriented database management system (OODMS) and the C++ programming language. In this paper, we describe the ECG data model, the functions of the storage and retrieval system, the features of the user interface, and the results of its implementation in the HIS.
Mao, Xue Gang; Du, Zi Han; Liu, Jia Qian; Chen, Shu Xin; Hou, Ji Yu
2018-01-01
Traditional field investigation and artificial interpretation cannot satisfy the need for forest gap extraction at the regional scale. High spatial resolution remote sensing imagery makes regional forest gap extraction possible. In this study, we used an object-oriented classification method to segment and classify forest gaps based on QuickBird high resolution optical remote sensing imagery in the Jiangle National Forestry Farm of Fujian Province. In the object-oriented classification process, 10 scales (10-100, with a step length of 10) were adopted to segment the QuickBird image, and the intersection area of the reference object (RAor) and the intersection area of the segmented object (RAos) were adopted to evaluate the segmentation result at each scale. For the segmentation result at each scale, 16 spectral characteristics and a support vector machine (SVM) classifier were further used to classify forest gaps, non-forest gaps, and others. The results showed that the optimal segmentation scale was 40, where RAor was equal to RAos. The accuracy difference between the maximum and minimum at different segmentation scales was 22%. At the optimal scale, the overall classification accuracy was 88% (Kappa=0.82) based on the SVM classifier. Combining high resolution remote sensing image data with an object-oriented classification method could replace traditional field investigation and artificial interpretation for identifying and classifying forest gaps at the regional scale.
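The scale-selection criterion (choosing the segmentation scale where RAor equals RAos) can be sketched as follows. This is a hypothetical helper: it simply picks the scale where the two evaluation curves come closest, approximating their crossing point.

```python
def optimal_scale(scales, ra_or, ra_os):
    """Return the segmentation scale where the RAor and RAos curves are
    closest, approximating the crossing point RAor == RAos."""
    diffs = [abs(a - b) for a, b in zip(ra_or, ra_os)]
    return scales[diffs.index(min(diffs))]
```

With the paper's setup this would be evaluated over scales 10, 20, ..., 100, yielding 40 as the optimum.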
Automatic orientation and 3D modelling from markerless rock art imagery
NASA Astrophysics Data System (ADS)
Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.
2013-02-01
This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple images. The Scale-Invariant Feature Transform (SIFT) and the Speeded Up Robust Features (SURF) will be assessed based on speed, number of features, matched features, and precision in image and object space depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms aiming at minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) will be assessed both in image and object space for the 3D modelling of a complex rock art shelter.
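The normalised cross-correlation at the heart of the ABM step can be sketched in a few lines. This is a minimal version for two equally sized patches; the paper's full ABM/LSM pipeline is considerably more involved:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized image patches.
    Returns a value in [-1, 1]; 1 means a perfect match."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

A patch correlated with itself yields 1.0, and with its negative yields -1.0, which is why NCC is robust to linear brightness changes between images.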
Lowe, H. J.
1993-01-01
This paper describes Image Engine, an object-oriented, microcomputer-based, multimedia database designed to facilitate the storage and retrieval of digitized biomedical still images, video, and text using inexpensive desktop computers. The current prototype runs on Apple Macintosh computers and allows network database access via peer to peer file sharing protocols. Image Engine supports both free text and controlled vocabulary indexing of multimedia objects. The latter is implemented using the TView thesaurus model developed by the author. The current prototype of Image Engine uses the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary (with UMLS Meta-1 extensions) as its indexing thesaurus. PMID:8130596
IDL Object Oriented Software for Hinode/XRT Image Analysis
NASA Astrophysics Data System (ADS)
Higgins, P. A.; Gallagher, P. T.
2008-09-01
We have developed a set of object oriented IDL routines that enable users to search, download and analyse images from the X-Ray Telescope (XRT) on-board Hinode. In this paper, we give specific examples of how the object can be used and how multi-instrument data analysis can be performed. The XRT object is a highly versatile and powerful IDL object, which will prove to be a useful tool for solar researchers. This software utilizes the generic Framework object available within the GEN branch of SolarSoft.
Luo, Jiebo; Boutell, Matthew
2005-05-01
Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture, and discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, which is impressive given the findings of a recent psychophysical study. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.
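The Bayesian integration of cues can be sketched as combining per-cue likelihoods over the four candidate orientations (0°, 90°, 180°, 270°) under a conditional-independence assumption. This is an illustrative simplification of the paper's confidence-based framework, not its actual implementation:

```python
import numpy as np

def integrate_cues(priors, likelihoods):
    """Posterior over the four candidate orientations, combining independent
    cue likelihoods with the prior and renormalising."""
    post = np.asarray(priors, dtype=float)
    for lik in likelihoods:
        post = post * np.asarray(lik, dtype=float)
    return post / post.sum()
```

With a uniform prior, the orientation favoured by the product of cue likelihoods wins; a confident semantic cue (e.g. a detected face) can override weak low-level evidence.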
Background Oriented Schlieren Using Celestial Objects
NASA Technical Reports Server (NTRS)
Haering, Edward A., Jr. (Inventor); Hill, Michael A. (Inventor)
2017-01-01
The present invention is a system and method of visualizing fluid flow around an object, such as an aircraft or wind turbine, by aligning the object between an imaging system and a celestial object having a speckled background, taking images, and comparing those images to obtain fluid flow visualization.
Object-oriented design of medical imaging software.
Ligier, Y; Ratib, O; Logean, M; Girard, C; Perrier, R; Scherrer, J R
1994-01-01
A special software package for interactive display and manipulation of medical images was developed at the University Hospital of Geneva, as part of a hospital wide Picture Archiving and Communication System (PACS). This software package, called Osiris, was especially designed to be easily usable and adaptable to the needs of noncomputer-oriented physicians. The Osiris software has been developed to allow the visualization of medical images obtained from any imaging modality. It provides generic manipulation tools, processing tools, and analysis tools more specific to clinical applications. This software, based on an object-oriented paradigm, is portable and extensible. Osiris is available on two different operating systems: the Unix X-11/OSF-Motif based workstations, and the Macintosh family.
Object-Oriented Query Language For Events Detection From Images Sequences
NASA Astrophysics Data System (ADS)
Ganea, Ion Eugen
2015-09-01
This paper presents a method for representing events extracted from image sequences and a query language for event detection. Using an object-oriented model, the spatial and temporal relationships between salient objects, and also between events, are stored and queried. This work aims to unify the storing and querying phases of video event processing. The object-oriented language syntax used for event processing allows the instantiation of index classes in order to improve the accuracy of query results. The experiments were performed on image sequences from the sport domain and show the reliability and robustness of the proposed language. To extend the language, a specific syntax will be added for constructing templates for abnormal events and for detection of incidents, the final goal of the research.
Object-oriented analysis and design of an ECG storage and retrieval system integrated with an HIS.
Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S
1996-03-01
For a hospital information system, object-oriented methodology plays an increasingly important role, especially for the management of digitized data, e.g., the electrocardiogram, electroencephalogram, electromyogram, spirogram, X-ray, CT and histopathological images, which are not yet computerized in most hospitals. As a first step in an object-oriented approach to hospital information management and storing medical data in an object-oriented database, we connected electrocardiographs to a hospital network and established the integration of ECG storage and retrieval systems with a hospital information system. In this paper, the object-oriented analysis and design of the ECG storage and retrieval systems is reported.
Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K.P.
2002-01-01
Supplement 23 to DICOM (Digital Imaging and Communications for Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification. PMID:11751804
Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.
2008-01-01
Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event, in both near-real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection; PCA and image differencing show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757
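The direct change detection (image differencing) approach can be sketched as thresholding the absolute difference of co-registered before/after bands. This is a minimal illustration; the mean-plus-k-sigma threshold rule here is an assumption, not taken from the paper:

```python
import numpy as np

def change_mask(before, after, k=2.0):
    """Flag pixels whose absolute difference exceeds mean + k*std of the
    difference image (simple direct change detection)."""
    d = np.abs(after.astype(float) - before.astype(float))
    return d > d.mean() + k * d.std()
```

Applied band-by-band to pre- and post-event Landsat TM scenes, such masks (or the difference bands themselves) feed the supervised, unsupervised, or object-oriented classifiers being compared.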
An object-oriented framework for medical image registration, fusion, and visualization.
Zhu, Yang-Ming; Cochoff, Steven M
2006-06-01
An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework illustrate its effectiveness: the first is for volume image grouping and re-sampling, the second for 2D registration and fusion, and the last for visualization of single images as well as registered volume images.
NASA Astrophysics Data System (ADS)
Li, Nan; Zhu, Xiufang
2017-04-01
Cultivated land resources are key to ensuring food security. Timely and accurate access to cultivated land information is conducive to scientific planning of food production and management policies. GaoFen 1 (GF-1) images have high spatial resolution and abundant texture information and thus can be used to identify fragmentized cultivated land. In this paper, an object-oriented artificial bee colony algorithm is proposed for extracting cultivated land from GF-1 images. Firstly, the GF-1 image was segmented with eCognition software, and some samples from the segments were manually labeled into two types (cultivated land and non-cultivated land). Secondly, the artificial bee colony (ABC) algorithm was used to search for classification rules based on the spectral and texture information extracted from the image objects. Finally, the extracted classification rules were used to identify the cultivated land area in the image. The experiment was carried out in the Hongze area, Jiangsu Province, using imagery from the wide field-of-view sensor on the GF-1 satellite. The total precision of the classification result was 94.95%, and the precision of cultivated land was 92.85%. The results show that the object-oriented ABC algorithm can overcome the lack of spectral information in GF-1 images and obtain high precision in cultivated land identification.
Object-oriented recognition of high-resolution remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan
2016-01-01
With the development of remote sensing imaging technology and the improvement of multi-source image resolution in the satellite visible-light, multi-spectral, and hyperspectral domains, high resolution remote sensing images have been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting, and the environment. In remote sensing images, segmentation of ground targets, feature extraction, and automatic recognition are hotspots and difficulties in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of typical vehicle object classification generation, nonparametric density estimation theory, mean shift segmentation theory, a multi-scale corner detection algorithm, and a template-based local shape matching algorithm. A remote sensing vehicle image classification software system was designed and implemented to meet the requirements.
Enhancing scattering images for orientation recovery with diffusion map
Winter, Martin; Saalmann, Ulf; Rost, Jan M.
2016-02-12
We explore the possibility of orientation recovery in single-molecule coherent diffractive imaging with diffusion map. This algorithm approximates the Laplace-Beltrami operator, which we diagonalize with a metric that corresponds to the mapping of Euler angles onto scattering images. While this approach is suitable for images of objects with specific properties, we show why it fails for realistic molecules. Here, we introduce a modification of the form factor in the scattering images which facilitates the orientation recovery and should be suitable for all recovery algorithms based on distances between individual images. (C) 2016 Optical Society of America
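A minimal diffusion-map embedding of a stack of images might look like the sketch below. The Gaussian kernel width `eps` and the plain Euclidean image distance are illustrative assumptions of this sketch; the paper's metric tied to Euler angles is more specific.

```python
import numpy as np

def diffusion_map(images, eps, n_components=2):
    """Embed images by the leading non-trivial eigenvectors of the
    diffusion (Markov) operator built from pairwise image distances."""
    X = images.reshape(len(images), -1)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    K = np.exp(-d2 / eps)                                 # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                  # row-stochastic matrix
    w, v = np.linalg.eig(P)
    order = np.argsort(-w.real)
    # the first eigenvector (eigenvalue 1) is constant; skip it
    return v.real[:, order[1:n_components + 1]]
```

The returned coordinates place images taken at nearby orientations close together, which is the property orientation-recovery algorithms exploit.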
Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K P
2002-01-01
Supplement 23 to DICOM (Digital Imaging and Communications in Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification.
Visualization: a tool for enhancing students' concept images of basic object-oriented concepts
NASA Astrophysics Data System (ADS)
Cetin, Ibrahim
2013-03-01
The purpose of this study was twofold: to investigate students' concept images about class, object, and their relationship, and to help them enhance their learning of these notions with a visualization tool. Fifty-six second-year university students participated in the study. To investigate the students' concept images, the researcher developed a survey of open-ended questions, which was administered to the participants. Follow-up interviews with 12 randomly selected students were conducted to explore their survey answers in depth. The results of the first part of the research were used to construct visualization scenarios, which the students then used to develop animations with Flash software. The study found that most of the students experienced difficulties in learning object-oriented notions. Overdependence on code-writing practice and examples, and incorrectly learned analogies, were identified as the sources of their difficulties. Moreover, visualization was found to be a promising approach for enriching students' concept images of basic object-oriented notions. The results of this study have implications for researchers and practitioners when designing programming instruction.
Measuring Positions of Objects using Two or More Cameras
NASA Technical Reports Server (NTRS)
Klinko, Steve; Lane, John; Nelson, Christopher
2008-01-01
An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras.
This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
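As a sketch of the affine camera model mentioned above, a 2x4 affine projection can be fitted to 3D-2D correspondences by plain least squares, with no advance knowledge of camera position or orientation. The function names and the noiseless setup are illustrative assumptions, not the article's code.

```python
import numpy as np

def fit_affine_camera(X, x):
    """Least-squares fit of an affine camera x = A @ X + b from
    3-D points X (N, 3) and their 2-D images x (N, 2)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous 3-D points
    # Solve Xh @ W = x for W; the 2x4 affine projection is P = W.T = [A | b]
    W, *_ = np.linalg.lstsq(Xh, x, rcond=None)
    return W.T

def project_affine(P, X):
    """Apply the 2x4 affine projection P to 3-D points X."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    return Xh @ P.T
```

With four or more non-degenerate correspondences the fit is exact on noiseless data, which is why the affine model is convenient when camera poses are unknown.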
Principal axes estimation using the vibration modes of physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2008-06-01
This paper addresses accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using principal axes estimation. The approach relies on the object's frequency-based features, which are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both orientation and scaling estimation.
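Although the paper derives orientation from frequency-based features of a deformable model, the underlying notion of a principal axis can be illustrated with plain second-order image moments. This is a standard moments-based sketch, not the authors' method.

```python
import numpy as np

def principal_axis_angle(mask):
    """Orientation of a binary object's major principal axis, from
    second-order central image moments (radians)."""
    ys, xs = np.nonzero(mask)
    xbar, ybar = xs.mean(), ys.mean()
    mu20 = ((xs - xbar) ** 2).mean()
    mu02 = ((ys - ybar) ** 2).mean()
    mu11 = ((xs - xbar) * (ys - ybar)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```

A rectangle elongated along the x-axis yields an angle near 0; a diagonal streak yields an angle near pi/4.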
Some new classification methods for hyperspectral remote sensing
NASA Astrophysics Data System (ADS)
Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia
2006-10-01
Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is its most commonly employed processing methodology. In this paper, three new hyperspectral image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for remote sensing image classification in recent years. In this method, image segmentation first extracts regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture, and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness, and orientation are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they use information and features from both the pixel and its neighborhood, and because the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion first divides all bands of the image into groups and extracts features from each group according to its properties; three levels of information fusion (data level, feature level, and decision level) are then applied to HRS image classification. An ANN can perform well in remote sensing image classification; to advance the use of ANNs for HRS image classification, the BPNN, the most commonly used neural network, is applied.
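The object-oriented step, computing a feature vector per segmented region before classification, can be sketched as below. The band ordering and the NDVI-from-the-last-two-bands convention are assumptions of this sketch, not the paper's configuration.

```python
import numpy as np

def region_features(image, labels):
    """Per-segment mean spectra plus NDVI, for object-based classification.

    `image` is (H, W, B) with assumed band order [..., red, nir];
    `labels` is (H, W) and assigns each pixel a segment id.
    Returns {segment_id: feature vector of length B + 1}.
    """
    feats = {}
    for seg in np.unique(labels):
        px = image[labels == seg]              # (n_pixels, B)
        mean = px.mean(axis=0)
        red, nir = mean[-2], mean[-1]
        ndvi = (nir - red) / (nir + red + 1e-9)
        feats[seg] = np.append(mean, ndvi)
    return feats
```

These region feature vectors (extended with shape parameters in the paper) are what the ANN classifier consumes, one vector per polygon rather than per pixel.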
Objected-oriented remote sensing image classification method based on geographic ontology model
NASA Astrophysics Data System (ADS)
Chu, Z.; Liu, Z. J.; Gu, H. Y.
2016-11-01
Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in remote sensing image classification, gradually replacing the traditional approach of optimizing classification results through algorithmic improvements alone. This paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment on urban features. The experiment uses the Protégé software developed by Stanford University and the intelligent image analysis software eCognition as the experimental platform, with hyperspectral imagery and LiDAR data acquired by flight over Dafeng City, Jiangsu, as the main data sources. First, the hyperspectral image is used to obtain feature knowledge of the remote sensing image and related spectral indices. Second, the LiDAR data are used to generate an nDSM (normalized digital surface model), providing elevation information. Finally, the image feature knowledge, spectral indices, and elevation information are combined to build the geographic ontology semantic network model that performs urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, especially for building classification.
The method not only exploits the advantages of multi-source spatial data, such as remote sensing imagery and LiDAR data, but also realizes the integration of multi-source spatial data knowledge and its application to remote sensing image classification, providing an effective way forward for object-oriented remote sensing image classification.
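The nDSM step, deriving above-ground height from LiDAR, reduces to subtracting a terrain model from a surface model. A minimal sketch; the 2.5 m building threshold is an illustrative assumption, not the paper's value.

```python
import numpy as np

def normalized_dsm(dsm, dtm):
    """Normalized digital surface model: above-ground height per cell
    (DSM minus DTM, with negative residuals clipped to ground level)."""
    return np.clip(dsm - dtm, 0.0, None)

def building_candidates(ndsm, min_height=2.5):
    """Boolean mask of cells tall enough to be building candidates."""
    return ndsm >= min_height
```

In an ontology-driven workflow, a rule like "building implies nDSM above a height threshold" is exactly the kind of elevation knowledge the semantic network encodes.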
Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments
NASA Technical Reports Server (NTRS)
Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi
1994-01-01
Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.
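Once the four circle centers are extracted from the mark, recovering position and orientation from matched feature points is a rigid alignment problem. The 2-D least-squares sketch below (Kabsch/Procrustes style) is an illustrative stand-in for the paper's measurement method, not its actual algorithm.

```python
import numpy as np

def rigid_pose_2d(model_pts, image_pts):
    """Least-squares 2-D rotation R and translation t mapping the mark's
    model feature points onto their measured image positions."""
    mc, ic = model_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (model_pts - mc).T @ (image_pts - ic)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ic - R @ mc
    return R, t
```

Four well-spread feature points, as the four-circle mark provides, are more than enough to make this alignment well conditioned.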
Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei
2016-01-01
Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in object-oriented information extraction from high-resolution remote sensing images, and the accuracy of thematic information extraction depends on them. Using WorldView-2 high-resolution data, this study developed a method for selecting optimal segmentation parameters for object-oriented image segmentation and high-resolution image information extraction, as follows. First, the best combination of bands and weights was determined for information extraction from the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different image segmentation parameters were obtained according to the surface features, and the high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that replaces expert judgment with reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762
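The mean-variance idea behind scale selection, scoring how homogeneous the segments at a candidate scale are, can be sketched as an area-weighted within-segment variance. This is a simplified single-band version; the paper's improved weighting is not reproduced here.

```python
import numpy as np

def weighted_mean_variance(image, labels):
    """Area-weighted mean of within-segment variance: lower values mean
    more homogeneous segments. Comparing this score across candidate
    segmentation scales is one way to pick an optimal scale."""
    total = float(image.size)
    score = 0.0
    for seg in np.unique(labels):
        px = image[labels == seg]
        score += (px.size / total) * px.var()
    return score
```

A segmentation that respects the image's structure scores lower than one that cuts across it, which is the comparison driving scale selection.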
Object-orientated DBMS techniques for time-oriented medical record.
Pinciroli, F; Combi, C; Pozzi, G
1992-01-01
In implementing time-orientated medical record (TOMR) management systems, the relational model has played a major role. Many applications have been developed to extend query and data manipulation languages to the temporal aspects of information. Our experience in developing a TOMR revealed some deficiencies in the relational model, such as: (a) abstract data type definition; (b) a unified view of data at the programming level; (c) management of temporal data; and (d) management of signals and images. We identified some initial topics to address through an object-orientated approach to database design. This paper describes the first steps in designing and implementing a TOMR with an object-orientated DBMS.
NASA Astrophysics Data System (ADS)
Ren, B.; Wen, Q.; Zhou, H.; Guan, F.; Li, L.; Yu, H.; Wang, Z.
2018-04-01
The purpose of this paper is to provide decision support for adjusting and optimizing the crop planting structure in Jingxian County. An object-oriented information extraction method is used to extract corn and cotton in Jingxian County, Hengshui City, Hebei Province, from multi-temporal GF-1 16-meter images. The best acquisition dates were selected by analyzing the spectral characteristics of corn and cotton at different growth stages, based on the multi-temporal GF-1 16-meter images, phenological data, and field survey data. The results show that the total classification accuracy for corn and cotton was 95.7%, the producer accuracies were 96% and 94%, respectively, and the user accuracies were 95.05% and 95.9%, respectively, which satisfies the demands of crop monitoring applications. Therefore, combining multi-temporal high-resolution images with object-oriented classification can effectively extract the large-scale distribution of crops, providing a convenient and effective technical means for crop monitoring.
NASA Astrophysics Data System (ADS)
Chen, C.; Gong, W.; Hu, Y.; Chen, Y.; Ding, Y.
2017-05-01
Automated building detection in aerial images is a fundamental problem in aerial and satellite image analysis. Recently, thanks to advances in feature description, the region-based CNN model (R-CNN) for object detection has received increasing attention. Despite its excellent performance in object detection, it is problematic to directly leverage the features of the R-CNN model for building detection in a single aerial image. A single aerial image is a vertical view, and buildings possess a significant directional feature; however, the R-CNN model ignores building direction and represents detections as horizontal rectangles, which cannot describe buildings precisely. To address this problem, we propose a novel model with a key orientation-related feature, namely the Oriented R-CNN (OR-CNN). Our contributions are mainly twofold: 1) a new oriented layer network for detecting the rotation angle of a building, built on the successful VGG-net R-CNN model; and 2) an oriented rectangle representation that leverages the powerful R-CNN for remote sensing building detection. In experiments, we establish a complete, brand-new data set for training our oriented R-CNN model and comprehensively evaluate the proposed method on a publicly available building detection data set. We demonstrate state-of-the-art results compared with previous baseline methods.
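The oriented-rectangle representation regressed by OR-CNN-style detectors adds a rotation angle to the usual box parameters; converting (cx, cy, w, h, theta) to corner coordinates is a small geometric exercise. This is a generic sketch, not the paper's code.

```python
import numpy as np

def oriented_rect_corners(cx, cy, w, h, theta):
    """Corner coordinates (4, 2) of an oriented detection box given as
    center (cx, cy), width w, height h, and rotation angle theta
    (radians, counter-clockwise)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    half = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
    return half @ R.T + np.array([cx, cy])
```

Unlike an axis-aligned box, this representation hugs an obliquely oriented building footprint without enclosing large background areas.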
NASA Astrophysics Data System (ADS)
Guo, H., II
2016-12-01
Spatial distribution information on settlements in mountainous areas is of great significance for earthquake emergency work, because most of the key earthquake-prone areas of China are located in mountainous terrain. Remote sensing, with its large coverage and low cost, is an important way to obtain this information. Fully considering geometric, spectral, and texture information, most studies have applied object-oriented methods to extract settlement information; in this article, semantic constraints are added on top of the object-oriented method. The experimental data are one scene from a domestic high-resolution satellite (GF-1), with a resolution of 2 meters. The processing consists of three steps: first, pretreatment, including orthorectification and image fusion; second, object-oriented information extraction, including image segmentation and information extraction; and last, removal of erroneous elements under semantic constraints. To formulate these semantic constraints, the distribution characteristics of mountainous settlements must be analyzed and the spatial-logical relations between settlements and other objects must be considered. The accuracy assessment shows that the extraction accuracy of the object-oriented method alone is 49%, rising to 86% with semantic constraints. The semantic-constraint approach can therefore effectively improve the accuracy of settlement information extraction in mountainous areas. The results show that it is feasible to extract mountainous settlement information from GF-1 imagery, demonstrating that domestic high-resolution optical remote sensing imagery has practical value for earthquake emergency preparedness.
Orientation of airborne laser scanning point clouds with multi-view, multi-scale image blocks.
Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik
2009-01-01
Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method includes registration by minimizing the distances between an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Having completed registration in the object space, the relative orientation between images and laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of a relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately by using the interactive method. Because the interactive method forces laser scanning data to fit with the images, inaccurate rotations cause corresponding shifts to image positions. However, in a test case, in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly within a couple of centimeters.
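The first method's registration by minimizing distances can be sketched, for the translation-only case, as an ICP-style loop. This is an illustrative simplification: real airborne-laser registration also solves for rotations.

```python
import numpy as np

def register_translation(points, model_pts, iters=10):
    """Translation-only registration of a point cloud to model points by
    iteratively minimizing nearest-neighbour distances (ICP-like sketch)."""
    t = np.zeros(points.shape[1])
    for _ in range(iters):
        moved = points + t
        # nearest model point for each cloud point (brute-force search)
        d2 = ((moved[:, None, :] - model_pts[None, :, :]) ** 2).sum(-1)
        nearest = model_pts[d2.argmin(axis=1)]
        t += (nearest - moved).mean(axis=0)
    return t
```

When the initial offset is small relative to point spacing, the nearest-neighbour matches are correct and the loop converges in one step; larger offsets are why the article uses a photogrammetric 3D model as a reliable reference.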
Science Objectives for a Soft X-ray Mission
NASA Astrophysics Data System (ADS)
Sibeck, D. G.; Connor, H. K.; Collier, M. R.; Collado-Vega, Y. M.; Walsh, B.
2016-12-01
When high charge state solar wind ions exchange electrons with exospheric neutrals, soft X-rays are emitted. In conjunction with flight-proven wide field-of-view soft X-ray imagers employing lobster-eye optics, recent simulations demonstrate the feasibility of imaging magnetospheric density structures such as the bow shock, magnetopause, and cusps. This presentation examines the Heliospheric scientific objectives that such imagers can address. Principal amongst these is the nature of reconnection at the dayside magnetopause: steady or transient, widespread or localized, component or antiparallel as a function of solar wind conditions. However, amongst many other objectives, soft X-ray imagers can provide crucial information concerning the structure of the bow shock as a function of solar wind Mach number and IMF orientation, the presence or absence of a depletion layer, the occurrence of Kelvin-Helmholtz or pressure-pulse driven magnetopause boundary waves, and the effects of radial IMF orientations and the foreshock upon bow shock and magnetopause location.
NASA Astrophysics Data System (ADS)
Petrochenko, Andrey; Konyakhin, Igor
2017-06-01
With the development of robotics, systems for three-dimensional reconstruction and mapping from image sets received from optical sensors have become increasingly popular. The main objective of technical and robot vision is the detection, tracking, and classification of objects in the space in which these systems and robots operate [15,16,18]. Two-dimensional images sometimes do not contain sufficient information to solve such problems: constructing a map of the surrounding area for routing; identifying objects and tracking their relative position and movement; and selecting objects and their attributes to supplement a knowledge base. Three-dimensional reconstruction of the surrounding space provides information on the relative positions of objects, their shape, and their surface texture. Systems that learn from the results of three-dimensional reconstruction can compare two-dimensional images against a three-dimensional model, allowing volumetric objects to be recognized in flat images. The problem of the relative orientation of industrial robots able to build three-dimensional scenes of controlled surfaces is becoming topical.
Visual Object Recognition and Tracking of Tools
NASA Technical Reports Server (NTRS)
English, James; Chang, Chu-Yin; Tardella, Neil
2011-01-01
A method has been created to automatically build an algorithm off-line, using computer-aided design (CAD) models, and to apply this at runtime. The object type is discriminated, and the position and orientation are identified. This system can work with a single image and can provide improved performance using multiple images provided from videos. The spatial processing unit uses three stages: (1) segmentation; (2) initial type, pose, and geometry (ITPG) estimation; and (3) refined type, pose, and geometry (RTPG) calculation. The image segmentation module finds all the tools in an image and isolates them from the background. For this, the system uses edge-detection and thresholding to find the pixels that are part of a tool. After the pixels are identified, nearby pixels are grouped into blobs. These blobs represent the potential tools in the image and are the product of the segmentation algorithm. The second module uses matched filtering (or template matching). This approach is used for condensing synthetic images using an image subspace that captures key information. Three degrees of orientation, three degrees of position, and any number of degrees of freedom in geometry change are included. To do this, a template-matching framework is applied. This framework uses an off-line system for calculating template images, measurement images, and the measurements of the template images. These results are used online to match segmented tools against the templates. The final module is the RTPG processor. Its role is to find the exact states of the tools given initial conditions provided by the ITPG module. The requirement that the initial conditions exist allows this module to make use of a local search (whereas the ITPG module had global scope). To perform the local search, 3D model matching is used, where a synthetic image of the object is created and compared to the sensed data. The availability of low-cost PC graphics hardware allows rapid creation of synthetic images.
In this approach, a function of orientation, distance, and articulation is defined as a metric on the difference between the captured image and a synthetic image with an object in the given orientation, distance, and articulation. The synthetic image is created using a model that is looked up in an object-model database. A composable software architecture is used for implementation. Video is first preprocessed to remove sensor anomalies (like dead pixels), and then is processed sequentially by a prioritized list of tracker-identifiers.
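The matched-filtering (template-matching) stage can be illustrated with plain normalized cross-correlation, where the argmax of the score map locates the best match. This is a textbook sketch, not the flight software.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide `template` over `image` and return the NCC score map; the
    argmax gives the top-left corner of the best-matching patch."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tn
            out[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return out
```

In practice the templates are the pre-computed synthetic images, and the subspace projection mentioned above keeps this search tractable over pose and geometry.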
1985-08-01
in a typography system, the surface of a ship hull, or the skin of an airplane. To define objects such as these, higher order curve and surface ...rate). Thus, a parametrization contains information about the geometry (the shape or image of the curve), the orientation, and the rate. Figure 2.3 ...2.3. Each of the curves above has the same image; they only differ in orientation and rate. Orientation is indicated by arrowheads and rate is
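The distinction drawn here between a curve's image, orientation, and rate can be made concrete: the two parametrizations below trace the same unit circle but differ in direction and speed. This is an illustrative reconstruction of the Figure 2.3 idea, not the report's own example.

```python
import numpy as np

# Two parametrizations with the same geometric image (the unit circle)
# but different orientation and rate.
t = np.linspace(0.0, 1.0, 201)
slow_ccw = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]   # one counter-clockwise loop
fast_cw = np.c_[np.cos(-4 * np.pi * t), np.sin(-4 * np.pi * t)]  # two clockwise loops

def speed(curve):
    """Numerical speed |dc/dt| along the parametrized curve."""
    d = np.gradient(curve, t[1] - t[0], axis=0)
    return np.linalg.norm(d, axis=1)
```

Both curves have the identical image (every point lies on the unit circle), yet the second runs in the opposite orientation at twice the rate, which is exactly the information a parametrization carries beyond shape.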
[An object-based information extraction technology for dominant tree species group types].
Tian, Tian; Fan, Wen-yi; Lu, Wei; Xiao, Xiang
2015-06-01
Information extraction for dominant tree species group types is difficult in remote sensing image classification; however, object-oriented classification using high spatial resolution remote sensing data is a new way to achieve accurate type information extraction. In this paper, taking the Jiangle Forest Farm in Fujian Province as the research area and based on Quickbird image data from 2013, an object-oriented method was adopted to identify farmland, shrub-herbaceous land, young afforested land, Pinus massoniana, Cunninghamia lanceolata, and broad-leaved tree types. Three types of classification factors, including spectral, texture, and various vegetation indices, were used to establish a class hierarchy. At the different levels, membership functions and decision tree classification rules were adopted. The results showed that the object-oriented method using texture, spectrum, and vegetation indices achieved a classification accuracy of 91.3%, an increase of 5.7% compared with using texture and spectrum alone.
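The membership functions used in such rule hierarchies are often simple piecewise-linear maps from a feature value to soft class evidence in [0, 1]. The trapezoidal form and the breakpoints below are generic illustrations, not the paper's calibrated values.

```python
def trapezoid_membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], rising linearly on
    [a, b], flat at 1 on [b, c], falling linearly on [c, d]. The kind of
    function used to turn an NDVI or texture value into class evidence."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)
```

In a class hierarchy, each node combines several such memberships (e.g., on NDVI and a texture measure) before a decision rule assigns the object.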
ERIC Educational Resources Information Center
Austerweil, Joseph L.; Griffiths, Thomas L.; Palmer, Stephen E.
2017-01-01
How does the visual system recognize images of a novel object after a single observation despite possible variations in the viewpoint of that object relative to the observer? One possibility is comparing the image with a prototype for invariance over a relevant transformation set (e.g., translations and dilations). However, invariance over…
System and method for object localization
NASA Technical Reports Server (NTRS)
Kelly, Alonzo J. (Inventor); Zhong, Yu (Inventor)
2005-01-01
A computer-assisted method for localizing a rack, including sensing an image of the rack, detecting line segments in the sensed image, recognizing a candidate arrangement of line segments in the sensed image indicative of a predetermined feature of the rack, generating a matrix of correspondence between the candidate arrangement of line segments and an expected position and orientation of the predetermined feature of the rack, and estimating a position and orientation of the rack based on the matrix of correspondence.
Hybrid region merging method for segmentation of high-resolution remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo
2014-12-01
Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.
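The global-then-local merging strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the size-weighted squared mean difference is an assumed dissimilarity criterion, and regions are reduced to a mean value, a size, and an adjacency set.

```python
def dissim(means, sizes, a, b):
    # Size-weighted squared mean difference (illustrative merging criterion).
    w = sizes[a] * sizes[b] / (sizes[a] + sizes[b])
    return w * (means[a] - means[b]) ** 2

def merge_pair(means, sizes, adj, a, b):
    # Merge region b into region a, updating mean, size, and adjacency.
    total = sizes[a] + sizes[b]
    means[a] = (means[a] * sizes[a] + means[b] * sizes[b]) / total
    sizes[a] = total
    for n in adj.pop(b):
        adj[n].discard(b)
        if n != a:
            adj[n].add(a)
            adj[a].add(n)
    adj[a].discard(b)
    del means[b], sizes[b]

def hybrid_region_merging(means, sizes, adj, threshold):
    while True:
        # Global step: the most similar adjacent pair seeds a growing region.
        pairs = [(dissim(means, sizes, a, b), a, b)
                 for a in adj for b in adj[a] if a < b]
        if not pairs:
            break
        d, a, b = min(pairs)
        if d > threshold:
            break
        merge_pair(means, sizes, adj, a, b)
        # Local step: keep absorbing region a's most similar neighbour,
        # so merging iterations stay within the local vicinity.
        while adj[a]:
            n = min(adj[a], key=lambda x: dissim(means, sizes, a, x))
            if dissim(means, sizes, a, n) > threshold:
                break
            merge_pair(means, sizes, adj, a, n)
    return means
```

On a toy chain of four regions with means 10, 11, 50, 51, this merges the two similar pairs and stops, leaving two segments.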
Horváth, Gábor; Buchta, Krisztián; Varjú, Dezsö
2003-06-01
It is a well-known phenomenon that when we look into the water with two aerial eyes, both the apparent position and the apparent shape of underwater objects are different from the real ones because of refraction at the water surface. Earlier studies of the refraction-distorted structure of the underwater binocular visual field of aerial observers were restricted to either vertically or horizontally oriented eyes. We investigate a generalized version of this problem: We calculate the position of the binocular image point of an underwater object point viewed by two arbitrarily positioned aerial eyes, including oblique orientations of the eyes relative to the flat water surface. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveas, the structure of the underwater binocular visual field is computed and visualized in different ways as a function of the relative positions of the eyes. We show that a revision of certain earlier treatments of the aerial imaging of underwater objects is necessary. We analyze and correct some widespread erroneous or incomplete representations of this classical geometric optical problem that occur in different textbooks. Improving the theory of aerial binocular imaging of underwater objects, we demonstrate that the structure of the underwater binocular visual field of aerial observers distorted by refraction is more complex than has been thought previously.
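The single-eye building block of such calculations is refraction at the flat surface. The following is a minimal numerical sketch, assuming a flat water surface, a refractive index of 1.33, and a 2D vertical-plane geometry; the function names and the bisection solver are illustrative, not the authors' method.

```python
import math

N_WATER = 1.33  # assumed refractive index of water (air = 1.0)

def refraction_point(eye_h, obj_depth, obj_x, iters=80):
    """Surface crossing point s of the ray from an aerial eye at (0, eye_h)
    to an underwater point at (obj_x, -obj_depth), found by bisection on
    Snell's law sin(i) = n * sin(r)."""
    def residual(s):
        sin_i = s / math.hypot(s, eye_h)                        # air side
        sin_r = (obj_x - s) / math.hypot(obj_x - s, obj_depth)  # water side
        return sin_i - N_WATER * sin_r
    lo, hi = 0.0, obj_x   # residual is negative at 0, positive at obj_x
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def apparent_depth(eye_h, obj_depth, obj_x):
    """Depth at which the back-projected air-side ray passes under the
    object's horizontal position; always shallower than the true depth."""
    s = refraction_point(eye_h, obj_depth, obj_x)
    return eye_h * (obj_x / s - 1.0)
```

Intersecting the back-projected rays of two such eyes would give the binocular image point studied in the paper.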
Hierarchical image feature extraction by an irregular pyramid of polygonal partitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skurikhin, Alexei N
2008-01-01
We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to a multi-criteria MST to account for multiple criteria important for an application.
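The MST-based agglomeration step can be sketched with Kruskal's algorithm on the region graph: merging stops when the next-cheapest edge exceeds a dissimilarity threshold, and the resulting connected components act as segments. The edge-weight model and threshold are assumptions for illustration.

```python
class DisjointSet:
    """Union-find with path halving, for Kruskal's algorithm."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[rb] = ra
        return True

def mst_segments(n_polygons, edges, max_dissimilarity):
    """edges: (weight, i, j) dissimilarities between adjacent polygons.
    Kruskal's construction, truncated at the dissimilarity threshold,
    leaves connected components that act as object-oriented segments."""
    ds = DisjointSet(n_polygons)
    for w, i, j in sorted(edges):
        if w > max_dissimilarity:
            break  # all remaining edges are at least this dissimilar
        ds.union(i, j)
    return [ds.find(i) for i in range(n_polygons)]
```

Here the edge-constrained behaviour of the paper would be modelled by simply omitting (or heavily weighting) edges that cross detected spectral discontinuities.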
Estimation of 3D shape from image orientations.
Fleming, Roland W; Holtmann-Rice, Daniel; Bülthoff, Heinrich H
2011-12-20
One of the main functions of vision is to estimate the 3D shape of objects in our environment. Many different visual cues, such as stereopsis, motion parallax, and shading, are thought to be involved. One important cue that remains poorly understood comes from surface texture markings. When a textured surface is slanted in 3D relative to the observer, the surface patterns appear compressed in the retinal image, providing potentially important information about 3D shape. What is not known, however, is how the brain actually measures this information from the retinal image. Here, we explain how the key information could be extracted by populations of cells tuned to different orientations and spatial frequencies, like those found in the primary visual cortex. To test this theory, we created stimuli that selectively stimulate such cell populations, by "smearing" (filtering) images of 2D random noise into specific oriented patterns. We find that the resulting patterns appear vividly 3D, and that increasing the strength of the orientation signals progressively increases the sense of 3D shape, even though the filtering we apply is physically inconsistent with what would occur with a real object. This finding suggests we have isolated key mechanisms used by the brain to estimate shape from texture. Crucially, we also find that adapting the visual system's orientation detectors to orthogonal patterns causes unoriented random noise to look like a specific 3D shape. Together these findings demonstrate a crucial role of orientation detectors in the perception of 3D shape.
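The "smearing" manipulation can be approximated by averaging shifted copies of a noise image along a chosen orientation, a crude oriented low-pass filter. This sketch assumes nothing about the authors' exact filters; it only demonstrates that oriented averaging suppresses image gradients along the smear direction, which is the oriented signal the theory appeals to.

```python
import numpy as np

def smear(noise, angle_deg, length=15):
    """Average `length` copies of the image shifted along `angle_deg`,
    approximating an oriented low-pass ('smearing') filter."""
    theta = np.deg2rad(angle_deg)
    dy, dx = np.sin(theta), np.cos(theta)
    out = np.zeros(noise.shape)
    for t in range(length):
        out += np.roll(np.roll(noise, int(round(t * dy)), axis=0),
                       int(round(t * dx)), axis=1)
    return out / length
```

After a horizontal smear, finite differences along x are much smaller than along y, i.e. the orientation signal dominates.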
Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification
NASA Astrophysics Data System (ADS)
Zhang, W.; Li, X.; Xiao, W.
2018-05-01
The increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because of inconsistent training sites and training samples, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Object-oriented image classification techniques show great potential to solve this problem, and Landsat moderate resolution remote sensing images are widely used for this purpose. First, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Second, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain appropriate parameters; using a top-down region merge algorithm starting from the single-pixel level, the optimal texture segmentation scale for the different types of features was confirmed. The segmented objects were then used as classification units to calculate spectral information such as the mean, maximum, minimum, brightness and normalized values. Spatial features such as the area, length, tightness and shape rule of each image object, and texture features such as the mean, variance and entropy of image objects, were used as classification features of the training samples. Based on the reference images and on-the-spot sampling points, typical training samples were selected uniformly and randomly for each type of ground object. The value ranges of the spectral, texture and spatial characteristics of each feature type in each feature layer were used to create the decision tree repository.
Finally, with the help of high-resolution reference images, a random sampling method was used to conduct the field investigation, achieving an overall accuracy of 90.31 % with a Kappa coefficient of 0.88. The classification method based on decision tree thresholds and the rule set developed from the repository outperforms the traditional methodology. Our decision tree repository and rule-set-based object-oriented classification technique is an effective method for producing comparable and consistent wetland data sets.
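A decision-tree repository of this kind amounts to ordered threshold rules over object features. A minimal sketch, with hypothetical feature names and value ranges (not the study's actual rules):

```python
def classify(obj, rules):
    """Apply ordered threshold rules; the first rule whose every feature
    range contains the object's value assigns the class."""
    for name, bounds in rules:
        if all(lo <= obj[f] <= hi for f, (lo, hi) in bounds.items()):
            return name
    return "unclassified"

# Hypothetical rule set in the spirit of a decision-tree repository.
RULES = [
    ("water",      {"ndvi": (-1.0, 0.0), "brightness": (0.0, 60.0)}),
    ("vegetation", {"ndvi": (0.3, 1.0)}),
]
```

Because the rules are explicit value ranges rather than trained pixel statistics, two organizations applying the same repository would obtain comparable results, which is the point the abstract makes.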
NASA Technical Reports Server (NTRS)
Hill, Michael A.; Haering, Edward A., Jr.
2017-01-01
The Background Oriented Schlieren using Celestial Objects series of flights was undertaken in the spring of 2016 at the National Aeronautics and Space Administration Armstrong Flight Research Center to further develop and improve a flow visualization technique that can be performed from the ground on aircraft in flight. Improved hardware and imaging techniques from previous schlieren tests were investigated. A United States Air Force T-38C and a NASA B200 King Air aircraft were imaged eclipsing the sun at ranges varying from 2 to 6 nautical miles, at subsonic and supersonic speeds.
The Visual Representation of 3D Object Orientation in Parietal Cortex
Cowan, Noah J.; Angelaki, Dora E.
2013-01-01
An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830
Retrieving high-resolution images over the Internet from an anatomical image database
NASA Astrophysics Data System (ADS)
Strupp-Adams, Annette; Henderson, Earl
1999-12-01
The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine that allows users to download image files individually and/or in batch mode.
NASA Astrophysics Data System (ADS)
Chenari, A.; Erfanifard, Y.; Dehghani, M.; Pourghasemi, H. R.
2017-09-01
Remotely sensed datasets offer a reliable means to precisely estimate biophysical characteristics of individual species sparsely distributed in open woodlands. Moreover, object-oriented classification has exhibited significant advantages over other classification methods for delineation of tree crowns and recognition of species in various types of ecosystems. However, it is still unclear whether this widely used classification method retains its advantages on unmanned aerial vehicle (UAV) digital images for mapping vegetation cover at single-tree levels. In this study, UAV orthoimagery was classified using the object-oriented classification method to map part of a wild pistachio nature reserve in the Zagros open woodlands, Fars Province, Iran. This research focused on recognizing the two main species of the study area (i.e., wild pistachio and wild almond) and estimating their mean crown area. The orthoimage of the study area consisted of 1,076 images with a spatial resolution of 3.47 cm, georeferenced using 12 ground control points (RMSE = 8 cm) gathered by the real-time kinematic (RTK) method. The results showed that the UAV orthoimagery classified by the object-oriented method efficiently estimated the mean crown area of wild pistachios (52.09±24.67 m2) and wild almonds (3.97±1.69 m2), with no significant difference from their observed values (α=0.05). In addition, the results showed that wild pistachios (accuracy of 0.90 and precision of 0.92) and wild almonds (accuracy of 0.90 and precision of 0.89) were well recognized by image segmentation. In general, we conclude that UAV orthoimagery can efficiently produce precise biophysical data of vegetation stands at single-tree levels, and is therefore suitable for assessment and monitoring of open woodlands.
NASA Astrophysics Data System (ADS)
Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.
2004-05-01
Many grey-level thresholding methods based on the histogram or other statistical information about the image of interest, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis of the images take little account of the morphology of the objects of interest, which can provide important cues for finding the optimum threshold, especially for organisms with distinctive textural morphologies such as vasculature or neural networks in medical imaging. In this paper, we propose a novel method for thresholding fluorescent vasculature image series recorded with a confocal scanning laser microscope. After extracting the basic orientation of the vessel slice inside a sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. The threshold values of regions near the one of interest, both in the x-y and optical directions, are then referenced to obtain the final thresholds for the region, which makes the whole stack of images look more continuous. The resulting images are characterized by suppression of both noise and non-interest tissues conglutinated to vessels, with improved vessel connectivity and edge definition. The value of the method for thresholding fluorescence images of biological objects is demonstrated by a comparison of the results of 3D vascular reconstruction.
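The initial per-region threshold from a perpendicular intensity profile can be sketched as follows; taking the background level as the median of the profile tails and splitting the difference to the vessel peak is an assumed heuristic for illustration, not the paper's exact formula.

```python
import numpy as np

def profile_threshold(profile):
    """Initial threshold from a 1-D intensity profile sampled perpendicular
    to the local vessel orientation: halfway between the background level
    (median of the profile tails) and the vessel peak."""
    k = max(1, len(profile) // 4)
    background = np.median(np.concatenate([profile[:k], profile[-k:]]))
    peak = profile.max()
    return 0.5 * (background + peak)
```

In the paper's scheme, such initial values would then be smoothed against thresholds of neighbouring regions in x-y and along the optical axis to keep the stack continuous.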
Sugarcane Crop Extraction Using Object-Oriented Method from ZY-3 High Resolution Satellite Tlc Image
NASA Astrophysics Data System (ADS)
Luo, H.; Ling, Z. Y.; Shao, G. Z.; Huang, Y.; He, Y. Q.; Ning, W. Y.; Zhong, Z.
2018-04-01
Sugarcane is one of the most important crops in Guangxi, China. With the development of satellite remote sensing technology, more remotely sensed images can be used for monitoring the sugarcane crop. With its Three Line Camera (TLC) images, wide coverage and stereoscopic mapping ability, the Chinese ZY-3 high resolution stereoscopic mapping satellite can provide rich information for sugarcane crop monitoring, such as spectral, shape and texture differences between the forward, nadir and backward images. Digital surface models (DSM) derived from ZY-3 TLC images can also provide height information for the sugarcane crop. In this study, we attempt to extract the sugarcane crop from ZY-3 images acquired in the harvest period. The ortho-rectified TLC images, fused image and DSM are processed for extraction. An object-oriented method is then used for image segmentation, sample collection and feature extraction. The results show that, with the help of ZY-3 TLC images, sugarcane crop information at harvest time can be extracted automatically, with an overall accuracy of about 85.3 %.
Biologically Inspired Model for Inference of 3D Shape from Texture
Gomez, Olman; Neumann, Heiko
2016-01-01
A biologically inspired model architecture for inferring 3D shape from texture is proposed. The model is hierarchically organized into modules roughly corresponding to visual cortical areas in the ventral stream. Initial orientation selective filtering decomposes the input into low-level orientation and spatial frequency representations. Grouping of spatially anisotropic orientation responses builds sketch-like representations of surface shape. Gradients in orientation fields and subsequent integration infers local surface geometry and globally consistent 3D depth. From the distributions in orientation responses summed in frequency, an estimate of the tilt and slant of the local surface can be obtained. The model suggests how 3D shape can be inferred from texture patterns and their image appearance in a hierarchically organized processing cascade along the cortical ventral stream. The proposed model integrates oriented texture gradient information that is encoded in distributed maps of orientation-frequency representations. The texture energy gradient information is defined by changes in the grouped summed normalized orientation-frequency response activity extracted from the textured object image. This activity is integrated by directed fields to generate a 3D shape representation of a complex object with depth ordering proportional to the fields output, with higher activity denoting larger distance in relative depth away from the viewer. PMID:27649387
Information mining in remote sensing imagery
NASA Astrophysics Data System (ADS)
Li, Jiang
The volume of remotely sensed imagery continues to grow at an enormous rate due to the advances in sensor technology, and our capability for collecting and storing images has greatly outpaced our ability to analyze and retrieve information from the images. This motivates us to develop image information mining techniques, which is very much an interdisciplinary endeavor drawing upon expertise in image processing, databases, information retrieval, machine learning, and software design. This dissertation proposes and implements an extensive remote sensing image information mining (ReSIM) system prototype for mining useful information implicitly stored in remote sensing imagery. The system consists of three modules: an image processing subsystem, a database subsystem, and a visualization and graphical user interface (GUI) subsystem. Land cover and land use (LCLU) information corresponding to spectral characteristics is identified by supervised classification based on support vector machines (SVM) with automatic model selection, while textural features that characterize spatial information are extracted using Gabor wavelet coefficients. Within LCLU categories, textural features are clustered using an optimized k-means clustering approach to obtain a search-efficient space. The clusters are stored in an object-oriented database (OODB) with associated images indexed in an image database (IDB). A k-nearest neighbor search is performed using a query-by-example (QBE) approach. Furthermore, an automatic parametric contour tracing algorithm and an O(n) time piecewise linear polygonal approximation (PLPA) algorithm are developed for shape information mining of objects of interest within the image. A fuzzy object-oriented database based on the fuzzy object-oriented data (FOOD) model is developed to handle the fuzziness and uncertainty. 
Three specific applications are presented: integrated land cover and texture pattern mining, shape information mining for change detection of lakes, and fuzzy normalized difference vegetation index (NDVI) pattern mining. The study results show the effectiveness of the proposed system prototype and the potentials for other applications in remote sensing.
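The cluster-then-search structure of the query-by-example step can be sketched as follows, assuming Euclidean distance on feature vectors; the data layout (centroids paired with member lists) is hypothetical, not the ReSIM schema.

```python
import math

def knn_query(example, clusters, k=3):
    """Query-by-example: restrict the search to the cluster whose centroid
    is nearest the example, then rank that cluster's members by distance.
    clusters: list of (centroid, [(image_id, feature_vector), ...])."""
    centroid, members = min(clusters, key=lambda c: math.dist(example, c[0]))
    ranked = sorted(members, key=lambda m: math.dist(example, m[1]))
    return [image_id for image_id, _ in ranked[:k]]
```

Searching only the nearest cluster is what makes the clustered space "search efficient": the k-NN scan touches one cluster's members instead of the whole archive, at the cost of possibly missing neighbours just across a cluster boundary.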
Effect of planar cuts' orientation on the perceived surface layout and object's shape.
Bocheva, Nadejda
2009-07-01
The effect of the orientation of the cutting planes producing planar curves over the surface of an object on its perceived pose and shape was investigated for line drawings representing three-dimensional objects. The results suggest that the orientational flow produced by the surface curves introduces an apparent object rotation in depth and in the image plane and changes in its perceived elongation. The apparent location of the nearest points is determined by the points of maximal view-dependent unsigned curvature of the surface curves. The data are discussed in relation to the interaction of the shape-from-silhouette system and shape-from-contour system and its effect on the interpretation of the surface contours with respect to the surface geometry.
NASA Astrophysics Data System (ADS)
Sventek, Joe
1998-12-01
Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, CA 94304, USA Introduction The USENIX Conference on Object-Oriented Technologies and Systems (COOTS) is held annually in the late spring. The conference evolved from a set of C++ workshops that were held under the auspices of USENIX, the first of which met in 1989. Given the growing diverse interest in object-oriented technologies, the C++ focus of the workshop eventually became too narrow, with the result that the scope was widened in 1995 to include object-oriented technologies and systems. COOTS is intended to showcase advanced R&D efforts in object-oriented technologies and software systems. The conference emphasizes experimental research and experience gained by using object-oriented techniques and languages to build complex software systems that meet real-world needs. COOTS solicits papers in the following general areas: application of, and experiences with, object-oriented technologies in particular domains (e.g. financial, medical, telecommunication); the architecture and implementation of distributed object systems (e.g. CORBA, DCOM, RMI); object-oriented programming and specification languages; object-oriented design and analysis. The 4th meeting of COOTS was held 27 - 30 April 1998 at the El Dorado Hotel, Santa Fe, New Mexico, USA. Several tutorials were given. The technical program proper consisted of a single track of six sessions, with three paper presentations per session. A keynote address and a provocative panel session rounded out the technical program. The program committee reviewed 56 papers, selecting the best 18 for presentation in the technical sessions. While we solicit papers across the spectrum of applications of object-oriented technologies, this year there was a predominance of distributed, object-oriented papers. The accepted papers reflected this asymmetry, with 15 papers on distributed objects and 3 papers on object-oriented languages. 
The papers in this special issue are the six best distributed object papers (in the opinion of the program committee). They represent the diversity of research in this particular area, and should give the reader a good idea of the types of papers presented at COOTS as well as the calibre of the work so presented. The papers The paper by Jain, Widoff and Schmidt explores the suitability of Java for writing performance-sensitive distributed applications. Despite the popularity of Java, there are many concerns about its efficiency; in particular, networking and computation performance are key concerns when considering the use of Java to develop performance-sensitive distributed applications. This paper makes three contributions to the study of Java for these applications: it describes an architecture using Java and the Web to develop MedJava, which is a distributed electronic medical imaging system with stringent networking and computation requirements; it presents benchmarks of MedJava image processing and compares the results to the performance of xv, which is an equivalent image processing application written in C; it presents performance benchmarks using Java as a transport interface to exchange large medical images over high-speed ATM networks. The paper by Little and Shrivastava covers the integration of several important topics: transactions, distributed systems, Java, the Internet and security. The usefulness of this paper lies in the synthesis of an effective solution applying work in different areas of computing to the Java environment. Securing applications constructed from distributed objects is important if these applications are to be used in mission-critical situations. Delegation is one aspect of distributed system security that is necessary for such applications. The paper by Nagaratnam and Lea describes a secure delegation model for Java-based, distributed object environments. 
The paper by Frølund and Koistinen addresses the topical issue of providing a common way for describing Quality-of-Service (QoS) features in distributed, object-oriented systems. They present a general QoS language, QML, that can be used to capture QoS properties as part of a design. They also show how to extend UML to support QML concepts. The paper by Szymaszek, Uszok and Zielinski discusses the important issue of efficient implementation and usage of fine-grained objects in CORBA-based applications. Fine-grained objects can have serious ramifications on overall application performance and scalability, and the paper suggests that such objects should not be treated as first-class CORBA objects, proposing instead the use of collections and smart proxies for efficient implementation. The paper by Milojicic, LaForge and Chauhan describes a mobile objects and agents infrastructure. Their particular research has focused on communication support across agent migration and extensive resource control. The paper also discusses issues regarding interoperation between agent systems. Acknowledgments The editor wishes to thank all of the authors, reviewers and publishers. Without their excellent work, and the contribution of their valuable time, this special issue would not have been possible.
Almeida, Andréa Sobral de; Werneck, Guilherme Loureiro; Resendes, Ana Paula da Costa
2014-08-01
This study explored the use of object-oriented classification of remote sensing imagery in epidemiological studies of visceral leishmaniasis (VL) in urban areas. To obtain temperature and environmental information, an object-oriented classification approach was applied to Landsat 5 TM scenes from the city of Teresina, Piauí State, Brazil. For 1993-1996, VL incidence rates correlated positively with census tracts covered by dense vegetation, grass/pasture, and bare soil and negatively with areas covered by water and densely populated areas. In 2001-2006, positive correlations were found with dense vegetation, grass/pasture, bare soil, and densely populated areas and negative correlations with occupied urban areas with some vegetation. Land surface temperature correlated negatively with VL incidence in both periods. Object-oriented classification can be useful to characterize landscape features associated with VL in urban areas and to help identify risk areas in order to prioritize interventions.
NASA Astrophysics Data System (ADS)
Borodinov, A. A.; Myasnikov, V. V.
2018-04-01
The present work compares the accuracy of well-known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image, either by the method of image moments or by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.
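Orientation normalization by the method of image moments can be sketched as follows: the dominant axis angle follows from the second-order central moments via θ = ½·atan2(2μ11, μ20 − μ02), after which the image would be rotated by −θ. The rotation step itself is omitted here.

```python
import numpy as np

def orientation_from_moments(img):
    """Dominant orientation (radians) of the bright structure in `img`,
    computed from second-order central image moments."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - xc) ** 2 * img).sum()
    mu02 = ((y - yc) ** 2 * img).sum()
    mu11 = ((x - xc) * (y - yc) * img).sum()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
```

For a diagonal line of pixels, the recovered angle is 45 degrees, as expected.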
A cultural side effect: learning to read interferes with identity processing of familiar objects
Kolinsky, Régine; Fernandes, Tânia
2014-01-01
Based on the neuronal recycling hypothesis (Dehaene and Cohen, 2007), we examined whether reading acquisition has a cost for the recognition of non-linguistic visual materials. More specifically, we checked whether the ability to discriminate between mirror images, which develops through literacy acquisition, interferes with object identity judgments, and whether interference strength varies as a function of the nature of the non-linguistic material. To these aims we presented illiterate, late literate (who learned to read at adult age), and early literate adults with an orientation-independent, identity-based same-different comparison task in which they had to respond “same” to both physically identical and mirrored or plane-rotated images of pictures of familiar objects (Experiment 1) or of geometric shapes (Experiment 2). Interference from irrelevant orientation variations was stronger with plane rotations than with mirror images, and stronger with geometric shapes than with objects. Illiterates were the only participants almost immune to mirror variations, but only for familiar objects. Thus, the process of unlearning mirror-image generalization, necessary to acquire literacy in the Latin alphabet, has a cost for a basic function of the visual ventral object recognition stream, i.e., identification of familiar objects. This demonstrates that neural recycling is not just an adaptation to multi-use but a process of at least partial exaptation. PMID:25400605
A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique and does not require any markers on or around the object. Multiple digital single-lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, a calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration; this can be done fully automatically using coded targets. The calibrated orientation parameters are then applied to all images taken with the same camera configuration, which means that when multi-image matching is performed for surface point cloud generation, the orientation parameters remain the calibrated ones even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums. PMID:23112656
Statistical Image Properties in Works from the Prinzhorn Collection of Artists with Schizophrenia
Henemann, Gudrun Maria; Brachmann, Anselm; Redies, Christoph
2017-01-01
The Prinzhorn Collection preserves and exhibits thousands of visual artworks by patients who had been diagnosed with mental illness. From this collection, we analyzed 1,256 images by 14 artists who were diagnosed with dementia praecox or schizophrenia. Six objective statistical properties that have been used previously to characterize visually aesthetic images were calculated. These properties reflect features of formal image composition, such as the complexity and distribution of oriented luminance gradients and edges, as well as Fourier spectral properties. Results for the artists with schizophrenia were compared to artworks from three public art collections of paintings and drawings that include highly acclaimed artworks as well as artworks of lesser artistic claim (control artworks). Many of the patients' works did not differ from these control images. However, the artworks of 6 of the 14 artists with schizophrenia possess image properties that deviate from the range of values obtained for the control artworks. For example, the artworks of four of the patients are characterized by a relative dominance of specific edge orientations in their images (low first-order entropy of edge orientations). Three patients created artworks with a relatively high ratio of fine detail to coarse structure (high slope of the Fourier spectrum). In conclusion, the present exploratory study opens novel perspectives for the objective scientific investigation of visual artworks created by persons who suffer from schizophrenia. PMID:29312011
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.
2008-01-01
Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event, in both near-real-time and post-event analyses. This paper compares the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed a direct change detection approach, using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. The techniques compared include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest-neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices that cross-tabulate correctly identified cells on the TM image against commission and omission errors. Overall, the object-oriented approach exhibits the highest accuracy in tornado damage detection, while PCA and image differencing show comparable outcomes. Selected principal components can improve detection accuracy by 5 to 10%, and the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
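The Kappa coefficient used for the accuracy assessment can be computed directly from the error (confusion) matrix. A minimal sketch:

```python
import numpy as np

def kappa(confusion):
    """Cohen's Kappa coefficient from a square error (confusion) matrix:
    agreement beyond chance, normalized by maximum possible agreement."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is at chance level, which is why it is preferred over raw overall accuracy for change detection maps.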
Two-dimensional shape recognition using oriented-polar representation
NASA Astrophysics Data System (ADS)
Hu, Neng-Chung; Yu, Kuo-Kan; Hsu, Yung-Li
1997-10-01
To achieve position-, scale-, and rotation-invariant (PSRI) object recognition, we utilize PSRI properties of the image obtained from an object, such as its centroid: the position of the centroid relative to the boundary of the image is invariant under rotation, scaling, and translation. To extract information from the image, we use a technique similar to the Radon transform, called the oriented-polar representation of a 2D image. In this representation, two specific points, the centroid and the weighted mean point, are selected to form an initial ray; the image is then sampled with N angularly equispaced rays departing from the initial ray. Each ray yields a number of boundary intersections and the distances from the centroid to those intersections. The shape recognition algorithm is based on the least total error over these two items of information. Together with simple noise removal and a typical backpropagation neural network, this algorithm is simple, yet PSRI is achieved with a high recognition rate.
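A much-simplified illustration of the oriented-polar idea follows: the initial ray is fixed by the centroid-to-weighted-mean direction, which is what makes the sampling rotation-invariant. This sketch records only the farthest boundary hit per ray, whereas the paper's representation keeps all intersections; the function name and parameters are illustrative.

```python
import numpy as np

def oriented_polar_signature(img, n_rays=16, n_steps=64):
    """Distances from the shape centroid to the boundary along n_rays
    rays, with the first ray oriented from the centroid toward the
    intensity-weighted mean point."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    mask = img > 0
    cy, cx = ys[mask].mean(), xs[mask].mean()      # shape centroid
    wy = (img * ys).sum() / img.sum()              # intensity-weighted
    wx = (img * xs).sum() / img.sum()              #   mean point
    theta0 = np.arctan2(wy - cy, wx - cx)          # initial ray direction
    r_max = max(img.shape)
    sig = np.zeros(n_rays)
    for k in range(n_rays):
        t = theta0 + 2 * np.pi * k / n_rays
        for r in np.linspace(0, r_max, n_steps):
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1] and mask[y, x]:
                sig[k] = r                         # keep the farthest hit
    return sig
```

For a centered disk the signature is approximately constant at the disk radius, regardless of how the image is rotated.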
NASA Astrophysics Data System (ADS)
Chuang, H.-K.; Lin, M.-L.; Huang, W.-C.
2012-04-01
Typhoon Morakot brought more than 2,000 mm of cumulative rainfall to southern Taiwan in August 2009, and this extreme rainfall event caused serious damage to the Kaoping River basin. The losses were mostly attributable to landslides along the sides of the river; shifting of the watercourse even led to the failure of roads and bridges, as well as flooding and levee damage around villages on the floodplain and terraces. Alluvial fans resulting from debris flows of tributary streams blocked the main watercourse, and a debris dam even formed and collapsed. These disasters have highlighted the importance of identifying and mapping watercourse alterations, surface features of the floodplain area, and artificial structures soon after a catastrophic typhoon event for natural hazard mitigation. Interpretation of remote sensing images is an efficient approach to acquiring spatial information over vast areas, making it suitable for the short-term differentiation of terrain and objects near extensive floodplains. An object-oriented image analysis program (Definiens Developer 7.0) and multi-band high-resolution satellite images (QuickBird, DigitalGlobe) were utilized to interpret floodplain features from Liouguei to Baolai in the Kaoping River basin after Typhoon Morakot. Object-oriented image interpretation uses homogenized image blocks, rather than pixels, as its elements, exploiting their shapes, textures, and the mutual relationships of adjacent elements, together with categorization conditions and rules, for semi-automatic interpretation of surface features. Digital terrain models (DTM) are also employed in this process to produce specific landform thematic layers.
These layers are especially helpful in differentiating categories that are easily confused in spectral analysis, such as landslides and riverbeds, or terraces and riverbanks, which are of significant engineering importance in disaster mitigation. In this study, an automatic and fast image interpretation process is proposed for eight surface features in the mountainous floodplain: main channel, secondary channel, sandbar, floodplain, river terrace, alluvial fan, landslide, and nearby artificial structures. Images along a timeline can also be compared to identify historical events such as village inundations; failures of roads, bridges, and levees; and alterations of the watercourse, and can therefore serve as references for the safety evaluation of engineering structures near rivers, disaster prevention and mitigation, and future land-use planning. Keywords: Flood plain area, Remote sensing, Object-oriented, Surface feature interpretation, Terrain analysis, Thematic layer, Typhoon Morakot
Railway obstacle detection algorithm using neural network
NASA Astrophysics Data System (ADS)
Yu, Mingyang; Yang, Peng; Wei, Sen
2018-05-01
To address the difficulty of obstacle detection in outdoor railway scenes, a data-oriented method based on a neural network for detecting image objects is proposed. First, we annotate objects (such as people, trains, and animals) in images acquired from the Internet, and then use residual learning units to build a Fast R-CNN framework. The network is trained with a stochastic gradient descent algorithm to learn the target image characteristics. Finally, the well-trained model is used to analyze an outdoor railway image; if it contains trains or other objects, an alert is issued. Experiments show that the warning accuracy reached 94.85%.
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse the advantages of multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature-fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then a color histogram and a line-gradient histogram are calculated for each object. The Earth Mover's Distance (EMD) statistical operator is used to measure the color distance and the edge-line feature distance between corresponding objects in different periods, and an adaptively weighted combination of the color feature distance and the edge-line distance constructs the object heterogeneity. Finally, the image-object change detection results are obtained through curvature histogram analysis. The experimental results show that the method can fully fuse color and edge-line features, thus improving the accuracy of change detection.
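For one-dimensional histograms with equal-width bins, the EMD reduces to the L1 distance between the cumulative distributions, and the heterogeneity is a weighted sum of the two feature distances. A hedged sketch (the fixed weight here is an illustrative stand-in for the paper's adaptive weighting; function names are ours):

```python
import numpy as np

def emd_1d(h1, h2):
    """1-D Earth Mover's Distance between two histograms with
    equal-width bins: the L1 distance between their CDFs."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    h1, h2 = h1 / h1.sum(), h2 / h2.sum()   # normalize to unit mass
    return np.abs(np.cumsum(h1 - h2)).sum()

def object_heterogeneity(color_d, edge_d, w):
    """Weighted combination of color-histogram distance and edge-line
    feature distance; w in [0, 1] plays the role of the adaptive weight."""
    return w * color_d + (1 - w) * edge_d
```

Moving all mass two bins over costs an EMD of 2, which matches the intuition of "work = mass x distance".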
Objective lens simultaneously optimized for pupil ghosting, wavefront delivery and pupil imaging
NASA Technical Reports Server (NTRS)
Olczak, Eugene G (Inventor)
2011-01-01
An objective lens includes multiple optical elements disposed between a first end and a second end, each optical element oriented along an optical axis. Each optical surface of the multiple optical elements provides an angle of incidence to a marginal ray that is above a minimum threshold angle. This threshold angle minimizes pupil ghosts that may enter an interferometer. The objective lens also optimizes wavefront delivery and pupil imaging onto an optical surface under test.
Malamy, J E; Shribak, M
2018-06-01
Epithelial cell dynamics can be difficult to study in intact animals or tissues. Here we use the medusa form of the hydrozoan Clytia hemisphaerica, which is covered with a monolayer of epithelial cells, to test the efficacy of an orientation-independent differential interference contrast microscope for in vivo imaging of wound healing. Orientation-independent differential interference contrast provides a phase image of unprecedented resolution of epithelial cells closing a wound in a live, nontransgenic animal model. In particular, the orientation-independent differential interference contrast microscope, equipped with a 40x/0.75NA objective lens and using illumination light with a wavelength of 546 nm, demonstrated a resolution of 460 nm. The repair of individual cells, the adhesion of cells to close a gap, and the concomitant contraction of these cells during closure are clearly visualized. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
Modeling Of Object- And Scene-Prototypes With Hierarchically Structured Classes
NASA Astrophysics Data System (ADS)
Ren, Z.; Jensch, P.; Ameling, W.
1989-03-01
The success of knowledge-based image analysis methodology and implementation tools depends largely on an appropriately and efficiently built model in which the domain-specific context information about, and the inherent structure of, the observed image scene have been encoded. To identify an object in an application environment, a computer vision system needs to know, first, the description of the object to be found in an image or image sequence and, second, the corresponding relationships between object descriptions within the image sequence. This paper presents models of image objects and scenes by means of hierarchically structured classes. Using the topovisual formalism of graphs and higraphs, we are currently studying principally the relational aspects and data abstraction of the modeling, in order to visualize the structural nature resident in image objects and scenes and to formalize their descriptions. The goal is to expose the structure of the image scene and the correspondence of image objects in the low-level image interpretation process. An object-based system design approach has been applied to build the model base. We utilize the object-oriented programming language C++ for designing, testing, and implementing the abstracted entity classes and the operation structures which have been modeled topovisually. The reference images used for modeling prototypes of objects and scenes are from industrial environments as well as medical applications.
[An object-oriented remote sensing image segmentation approach based on edge detection].
Tan, Yu-Min; Huai, Jian-Zhu; Tang, Zhong-Shi
2010-06-01
Satellite sensor technology has enabled better discrimination of various landscape objects, and image segmentation approaches to extracting conceptual objects and patterns have consequently been explored, with a wide variety of such algorithms now available. In order to effectively utilize edge and topological information in high-resolution remote sensing imagery, an object-oriented algorithm combining edge detection and region merging is proposed. The SUSAN edge filter is first applied to the panchromatic band of QuickBird imagery with a spatial resolution of 0.61 m to obtain an edge map. Guided by this edge map, a two-phase region-based segmentation method operates on the fusion of the panchromatic and multispectral QuickBird images to produce the final partition. In the first phase, a quadtree grid consisting of squares with sides parallel to the image's left and top borders recursively agglomerates square subsets where a uniformity measure is satisfied, deriving image object primitives. Before the merging of the second phase, the contextual and spatial information of the resulting squares (e.g., neighbor relationships, boundary coding) is retrieved efficiently by means of the quadtree structure. A region merging operation is then performed on those primitives, with a merging criterion that integrates the edge map and region-based features. This approach has been tested on QuickBird images of a site in the Sanxia area, and the result is compared with those of ENVI Zoom and Definiens. Quantitative evaluation of the quality of the segmentation results is also presented. Experimental results demonstrate stable convergence and efficiency.
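The first-phase quadtree decomposition can be sketched in a few lines. Here the uniformity measure is simply block variance against a threshold, a stand-in assumption; the paper's actual criterion and the edge-guided merging phase are not reproduced.

```python
import numpy as np

def quadtree_primitives(img, thresh=0.01, min_size=2):
    """Recursively split a square image into quadrants until each block
    is homogeneous (variance <= thresh) or minimal; returns a list of
    (top, left, size) blocks -- the object primitives to be merged later."""
    img = np.asarray(img, dtype=float)
    blocks = []

    def split(y, x, size):
        tile = img[y:y + size, x:x + size]
        if size <= min_size or tile.var() <= thresh:
            blocks.append((y, x, size))         # homogeneous leaf block
            return
        h = size // 2
        for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
            split(y + dy, x + dx, h)

    split(0, 0, img.shape[0])
    return blocks
```

A uniform image stays a single block, while an image with one bright quadrant splits exactly once into four homogeneous quadrants.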
NASA Astrophysics Data System (ADS)
Lemma, Hanibal; Frankl, Amaury; Poesen, Jean; Adgo, Enyew; Nyssen, Jan
2017-04-01
Object-oriented image classification has been gaining prominence in the field of remote sensing and provides a valid alternative to 'traditional' pixel-based methods; recent studies have proven the superiority of the object-based approach. So far, object-oriented land cover classifications have been applied either at limited spatial coverage (2 to 1,091 km2) or with very high resolution (0.5-16 m) imagery. The main aim of this study is to derive land cover information for a large area from Landsat 8 OLI surface reflectance using the Estimation of Scale Parameter (ESP) tool and the object-oriented software eCognition. The available land cover map of the Lake Tana Basin (Ethiopia) is about 20 years old, has a coarser spatial scale (1:250,000), and is of limited use for environmental modelling and monitoring studies. Up-to-date, basin-wide land cover maps are essential to overcome haphazard natural resources management, land degradation, and reduced agricultural production. The object-oriented approach involves image segmentation prior to classification, i.e., adjacent similar pixels are aggregated into segments such that the heterogeneity in the spectral and spatial domains is minimized. For each segmented object, different attributes (spectral, textural, and shape) were calculated and used in the subsequent classification analysis, and the commonly used error matrix was employed to determine the quality of the land cover map. The multiresolution segmentation (with parameters scale=30, shape=0.3, and compactness=0.7) produces highly homogeneous image objects, as observed at different sample locations in Google Earth. Of the 15,089 km2 area of the basin, cultivated land is dominant (69%), followed by water bodies (21%), grassland (4.8%), forest (3.7%), and shrubs (1.1%); wetlands, artificial surfaces, and bare land cover only about 1% of the basin. The overall classification accuracy is 80%, with a Kappa coefficient of 0.75.
With regard to individual classes, the classification shows high Producer's and User's accuracies (above 84%) for cultivated land, water bodies, and forest, but lower ones (less than 70%) for shrubs, bare land, and grassland. Key words: accuracy assessment, eCognition, Estimation of Scale Parameter, land cover, Landsat 8, remote sensing
Nakamura, Kimihiro; Makuuchi, Michiru; Nakajima, Yasoichi
2014-01-01
Previous studies show that the primate and human visual systems automatically generate a common, invariant representation from a visual object image and its mirror reflection. For humans, however, this mirror-image generalization seems to be partially suppressed through literacy acquisition, since literate adults have greater difficulty in recognizing mirror images of letters than those of other visual objects. At the neural level, such a category-specific effect on mirror-image processing has been associated with the left occipitotemporal cortex (L-OTC), but it remains unclear whether the apparent "inhibition" of mirror letters is mediated by suppressing mirror-image representations covertly generated from normal letter stimuli. Using transcranial magnetic stimulation (TMS), we examined how transient disruption of the L-OTC affects mirror-image recognition during a same-different judgment task, while varying the semantic category (letters and non-letter objects), identity (same or different), and orientation (same or mirror-reversed) of the first and second stimuli. We found that magnetic stimulation of the L-OTC produced a significant delay in mirror-image recognition for letter strings but not for other objects. By contrast, this category-specific impact was not observed when TMS was applied to other control sites, including the right homologous area and the vertex. These results thus demonstrate a causal link between the L-OTC and mirror-image discrimination in literate people. We further suggest that left-right sensitivity for letters is not achieved by a local inhibitory mechanism in the L-OTC but probably relies on inter-regional coupling with other orientation-sensitive occipito-parietal regions.
Parallel object-oriented data mining system
Kamath, Chandrika; Cantu-Paz, Erick
2004-01-06
A data mining system uncovers patterns, associations, anomalies, and other statistically significant structures in data. Data files are read and displayed, objects in the data files are identified, relevant features for the objects are extracted, and patterns among the objects are recognized based upon the features. Data from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) sky survey, generated by the Very Large Array in New Mexico, was used to search for bent doubles, a special type of quasar (radio-emitting stellar object). The FIRST survey has generated more than 32,000 images of the sky to date; each image is 7.1 megabytes, yielding more than 100 gigabytes of image data in the entire data set.
NASA Astrophysics Data System (ADS)
Anai, T.; Kochi, N.; Yamada, M.; Sasaki, T.; Otani, H.; Sasaki, D.; Nishimura, S.; Kimoto, K.; Yasui, N.
2015-05-01
As 3D image measurement software has come into wide use with the recent development of computer-vision technology, 3D measurement from images has extended its field of application from desktop objects to topographic surveys of large geographical areas. In particular, orientation, which used to be a complicated process in earlier image measurement, can now be performed automatically by simply taking many pictures around the object. For fully textured objects, the 3D measurement of surface features is now done entirely automatically from the oriented images, which greatly facilitates the acquisition of dense, high-precision 3D point clouds. Against this background, for small and middle-sized objects we now provide all-around 3D measurement with a single commercially available digital camera, and we have also developed technology for topographic measurement from airborne images taken by a small UAV [1-5]. In the present study, for small objects, we examine the accuracy of surface measurement (matching) using experimental data; for topographic measurement, we examine the influence of GCP distribution on accuracy, again using experimental data. In addition, we examined the differences in the analytical results across several 3D image measurement software packages. This document reviews the processing flow of orientation and 3D measurement in each package and explains its features. To verify the precision of stereo matching, we measured a test plane and a test sphere of known form and assessed the results. For the topographic measurement, we used airborne image data photographed at the test field in Yadorigi, Matsuda City, Kanagawa Prefecture, Japan, where we established ground control points measured by RTK-GPS and total station.
We present the results of the analysis performed with each 3D image measurement software package, and further deepen the study of the influence of GCP distribution on precision.
NASA Astrophysics Data System (ADS)
Neulist, Joerg; Armbruster, Walter
2005-05-01
Model-based object recognition in range imagery typically involves matching the image data to the expected model data for each feasible model and pose hypothesis. Since the matching procedure is computationally expensive, the key to efficient object recognition is the reduction of the set of feasible hypotheses. This is particularly important for military vehicles, which may consist of several large moving parts such as the hull, turret, and gun of a tank, and hence require an eight or higher dimensional pose space to be searched. The presented paper outlines techniques for reducing the set of feasible hypotheses based on an estimation of target dimensions and orientation. Furthermore, the presence of a turret and a main gun and their orientations are determined. The vehicle parts dimensions as well as their error estimates restrict the number of model hypotheses whereas the position and orientation estimates and their error bounds reduce the number of pose hypotheses needing to be verified. The techniques are applied to several hundred laser radar images of eight different military vehicles with various part classifications and orientations. On-target resolution in azimuth, elevation and range is about 30 cm. The range images contain up to 20% dropouts due to atmospheric absorption. Additionally some target retro-reflectors produce outliers due to signal crosstalk. The presented algorithms are extremely robust with respect to these and other error sources. The hypothesis space for hull orientation is reduced to about 5 degrees as is the error for turret rotation and gun elevation, provided the main gun is visible.
Intermittent behavior in the brain neuronal network in the perception of ambiguous images
NASA Astrophysics Data System (ADS)
Hramov, Alexander E.; Kurovskaya, Maria K.; Runnova, Anastasiya E.; Zhuravlev, Maxim O.; Grubov, Vadim V.; Koronovskii, Alexey A.; Pavlov, Alexey N.; Pisarchik, Alexander N.
2017-03-01
Characteristics of intermittency during the perception of ambiguous images have been studied using the Necker cube image as a bistable object in the experiments, with EEG being simultaneously measured. Distributions of the lengths of time intervals corresponding to left-oriented and right-oriented Necker cube perception have been obtained. The EEG data have been analyzed using the continuous wavelet transform, and it is shown that the destruction of the alpha rhythm, with accompanying generation of high-frequency oscillations, can serve as an electroencephalographic marker of the Necker cube recognition process in the human brain.
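The continuous wavelet transform used for the EEG analysis can be sketched with a complex Morlet wavelet: the transform magnitude peaks at the scale matching a rhythm's frequency, which is how the presence or destruction of the alpha rhythm is tracked. A minimal NumPy sketch (normalization and scale choices are illustrative, not the paper's):

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Minimal continuous wavelet transform with a complex Morlet
    wavelet; |W[i]| is large where the signal oscillates at the
    frequency corresponding to scales[i]."""
    signal = np.asarray(signal, dtype=float)
    t = np.arange(-len(signal) // 2, len(signal) // 2)
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for i, s in enumerate(scales):
        wavelet = (np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
                   / np.sqrt(s))
        # Correlation with the wavelet = convolution with its reversed conjugate.
        out[i] = np.convolve(signal, np.conj(wavelet)[::-1], mode="same")
    return out
```

For a pure sinusoid at normalized frequency f (cycles per sample), the responding scale is approximately s = w0 / (2 * pi * f).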
Nguyen, Dat Tien; Park, Kang Ryoung
2016-07-21
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of the gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradients (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images. PMID:27455264
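The building block of HOG, and hence of the wHOG feature described above, is a magnitude-weighted histogram of gradient orientations per region; the wHOG idea then scales each region's histogram by a quality score before concatenation. A hedged NumPy sketch (function names and the scalar quality weighting are illustrative; the paper's quality measure is not reproduced):

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Histogram of unsigned gradient orientations (0-180 deg), weighted
    by gradient magnitude, for one image patch."""
    gy, gx = np.gradient(np.asarray(patch, dtype=float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist = np.zeros(n_bins)
    idx = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    np.add.at(hist, idx, mag)      # accumulate magnitude into angle bins
    return hist

def weighted_hog(hists, qualities):
    """Scale each region's histogram by its quality score, then
    concatenate -- a sketch of the wHOG combination."""
    return np.concatenate([q * h for h, q in zip(hists, qualities)])
```

A region judged to be background (quality near zero) then contributes almost nothing to the final descriptor.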
Sensory Interactive Teleoperator Robotic Grasping
NASA Technical Reports Server (NTRS)
Alark, Keli; Lumia, Ron
1997-01-01
As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller, and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction is performed on the object in the image, information about the object's location, orientation, and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.
Intermittency in electric brain activity in the perception of ambiguous images
NASA Astrophysics Data System (ADS)
Kurovskaya, Maria K.; Runnova, Anastasiya E.; Zhuravlev, Maxim O.; Grubov, Vadim V.; Koronovskii, Alexey A.; Pavlov, Alexey N.; Pisarchik, Alexander N.
2017-04-01
The present paper is devoted to the study of intermittency during the perception of the bistable Necker cube image, a good example of an ambiguous object, with simultaneous EEG measurement. Distributions of the lengths of time intervals corresponding to left-oriented and right-oriented cube perception have been obtained. The EEG data were analyzed using the continuous wavelet transform, and it was shown that the destruction of the alpha rhythm, with accompanying generation of high-frequency oscillations, can serve as a marker of the Necker cube recognition process.
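The continuous wavelet transform used to track the alpha rhythm can be sketched directly in numpy; the Morlet mother wavelet and the direct time-domain convolution below are a generic, illustrative choice (the paper's wavelet parameters are not given in the abstract).

```python
import numpy as np

def morlet_cwt(signal, scales, dt=1.0, w0=6.0):
    """Continuous wavelet transform with a complex Morlet mother
    wavelet, computed by direct time-domain convolution. Scale s is
    tuned to frequency f roughly via s = w0 / (2*pi*f)."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        u = t / s
        psi = np.pi ** -0.25 * np.exp(1j * w0 * u - u ** 2 / 2)
        out[i] = np.convolve(signal, np.conj(psi)[::-1], 'same') * np.sqrt(dt / s)
    return out
```

Band power per scale, e.g. `(np.abs(W) ** 2).mean(axis=1)`, then serves as the marker signal: a collapse of power at the alpha-band scale with growth at smaller scales would flag the recognition events described above.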
The objective assessment of experts' and novices' suturing skills using an image analysis program.
Frischknecht, Adam C; Kasten, Steven J; Hamstra, Stanley J; Perkins, Noel C; Gillespie, R Brent; Armstrong, Thomas J; Minter, Rebecca M
2013-02-01
To objectively assess suturing performance using an image analysis program and to provide validity evidence for this assessment method by comparing experts' and novices' performance. In 2009, the authors used an image analysis program to extract objective variables from digital images of suturing end products obtained during a previous study involving third-year medical students (novices) and surgical faculty and residents (experts). Variables included number of stitches, stitch length, total bite size, travel, stitch orientation, total bite-size-to-travel ratio, and symmetry across the incision ratio. The authors compared all variables between groups to detect significant differences and two variables (total bite-size-to-travel ratio and symmetry across the incision ratio) to ideal values. Five experts and 15 novices participated. Experts' and novices' performances differed significantly (P < .05) with large effect sizes attributable to experience (Cohen d > 0.8) for total bite size (P = .009, d = 1.5), travel (P = .045, d = 1.1), total bite-size-to-travel ratio (P < .0001, d = 2.6), stitch orientation (P = .014, d = 1.4), and symmetry across the incision ratio (P = .022, d = 1.3). The authors found that a simple computer algorithm can extract variables from digital images of a running suture and rapidly provide quantitative summative assessment feedback. The significant differences found between groups confirm that this system can discriminate between skill levels. This image analysis program represents a viable training tool for objectively assessing trainees' suturing, a foundational skill for many medical specialties.
A study of earthquake-induced building detection by object oriented classification approach
NASA Astrophysics Data System (ADS)
Sabuncu, Asli; Damla Uca Avci, Zehra; Sunar, Filiz
2017-04-01
Among natural hazards, earthquakes are the most destructive disasters, causing huge loss of life, heavy infrastructure damage, and great financial losses every year all around the world. According to earthquake statistics, more than a million earthquakes occur per year in the world, equal to two earthquakes per minute. Natural disasters have caused more than 780,000 deaths since 2001, and approximately 60% of this mortality is due to earthquakes. A great earthquake took place at 38.75 N 43.36 E in Van Province, in the eastern part of Turkey, on October 23rd, 2011. 604 people died, and about 4000 buildings were seriously damaged or collapsed in this earthquake. In recent years, the use of the object-oriented classification approach, based on different object features such as spectral, textural, shape, and spatial information, has gained importance and become widespread for the classification of high-resolution satellite images and orthophotos. The motivation of this study is to detect the collapsed buildings and debris areas after the earthquake by applying object-oriented classification to very high-resolution satellite images and orthophotos, and to see how well remote sensing technology performs in determining collapsed buildings. In this study, two different land surfaces were selected as homogeneous and heterogeneous case study areas. In the first step of the application, multi-resolution segmentation was applied, and optimum parameters were selected to obtain the objects in each area after testing different color/shape and compactness/smoothness values. In the next step, two different classification approaches, namely "supervised" and "unsupervised", were applied and their classification performances were compared. Object-Based Image Analysis (OBIA) was performed using eCognition software.
[Object-oriented aquatic vegetation extraction approach based on visible vegetation indices].
Jing, Ran; Deng, Lei; Zhao, Wen Ji; Gong, Zhao Ning
2016-05-01
Using the estimation of scale parameters (ESP) image segmentation tool to determine the ideal segmentation scale, the optimal segmented image was created by the multi-scale segmentation method. Based on the visible vegetation indices derived from mini-UAV imaging data, we chose a set of optimal vegetation indices from a series of visible vegetation indices and built a decision tree rule. A membership function was used to automatically classify the study area, and an aquatic vegetation map was generated. The results showed that the overall accuracy of the supervised classification was 53.7%, while the overall accuracy of object-oriented image analysis (OBIA) was 91.7%. Compared with the pixel-based supervised classification method, the OBIA method significantly improved the image classification result and further increased the accuracy of aquatic vegetation extraction. The Kappa value of the supervised classification was 0.4, and that of OBIA was 0.9. The experimental results demonstrated that extracting aquatic vegetation using visible vegetation indices derived from mini-UAV data and the OBIA method developed in this study is feasible and could be applied in other physically similar areas.
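The specific indices in the study's decision tree are not listed in the abstract; the Excess Green index below is one standard visible-band vegetation index computable from RGB-only UAV imagery, shown here as a plausible member of the screened set. The threshold mentioned in the comment is an illustrative placeholder.

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index ExG = 2g - r - b on chromatic coordinates,
    a common visible-band vegetation index for RGB-only UAV imagery."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                      # avoid division by zero
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b

# In an OBIA decision tree, an image object would be flagged as
# vegetation when, e.g., its mean ExG exceeds a trained threshold.
```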
Tsai, Chung-Yu
2012-04-01
An exact analytical approach is proposed for measuring the six-degree-of-freedom (6-DOF) motion of an object using the image-orientation-change (IOC) method. The proposed measurement system comprises two reflector systems, where each system consists of two reflectors and one position sensing detector (PSD). The IOCs of the object in the two reflector systems are described using merit functions determined from the respective PSD readings before and after the motion. The three rotation variables are then determined analytically from the eigenvectors of the corresponding merit functions. Once the three rotation variables are known, the translation equations reduce to a linear form, so the solution for the three translation variables can also be determined analytically. As a result, the motion transformation matrix describing the 6-DOF motion of the object is fully determined. The validity of the proposed approach is demonstrated by means of an illustrative example.
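Recovering a rotation analytically from direction measurements before and after motion can be illustrated with the standard Kabsch/SVD solution; this is a generic stand-in for the paper's merit-function eigenvector formulation, not its exact method.

```python
import numpy as np

def rotation_from_directions(before, after):
    """Least-squares rotation R (so after_i = R @ before_i) aligning
    row-wise direction vectors measured before and after the motion,
    via the Kabsch/SVD construction."""
    H = before.T @ after
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

Once R is known, a rigid-motion model `after = R @ before + t` is linear in the translation t, mirroring the order reduction described above.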
NASA Astrophysics Data System (ADS)
Wu, M. F.; Sun, Z. C.; Yang, B.; Yu, S. S.
2016-11-01
In order to reduce the “salt and pepper” effect in pixel-based urban land cover classification and expand the application of multi-source data fusion in the field of urban remote sensing, WorldView-2 imagery and airborne Light Detection and Ranging (LiDAR) data were used to improve the classification of urban land cover. An object-oriented hierarchical classification approach was proposed in our study. The processing of the proposed method consisted of two hierarchies. (1) In the first hierarchy, the LiDAR Normalized Digital Surface Model (nDSM) image was segmented into objects, and NDVI, Coastal Blue, and nDSM thresholds were set for extracting building objects. (2) In the second hierarchy, after removing building objects, WorldView-2 fused imagery was obtained by Haze-ratio-based (HR) fusion and segmented, and an SVM classifier was applied to generate road/parking lot, vegetation, and bare soil objects. (3) Trees and grasslands were then split based on an nDSM threshold (2.4 meters). The results showed that, compared with the pixel-based and non-hierarchical object-oriented approaches, the proposed method provided better urban land cover classification, with the overall accuracy (OA) and overall kappa (OK) improving to 92.75% and 0.90, respectively. Furthermore, the proposed method reduced the “salt and pepper” effect of pixel-based classification, improved the extraction accuracy of buildings based on LiDAR nDSM image segmentation, and reduced the confusion between trees and grasslands through the nDSM threshold.
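The threshold hierarchy can be sketched as a vectorized rule set. Only the 2.4 m tree/grass split is given in the abstract; the NDVI and building-height thresholds below are illustrative placeholders, and the sketch operates per pixel rather than per segmented object.

```python
import numpy as np

def hierarchical_classify(ndvi, ndsm, ndvi_veg=0.3,
                          tree_height=2.4, bldg_height=2.5):
    """Toy two-level rule set in the spirit of the paper's hierarchy:
    tall non-vegetated cells -> building; vegetated cells split by
    nDSM height into tree/grass; the remainder -> ground."""
    cls = np.full(ndvi.shape, 'ground', dtype=object)
    veg = ndvi >= ndvi_veg
    cls[~veg & (ndsm >= bldg_height)] = 'building'
    cls[veg & (ndsm >= tree_height)] = 'tree'
    cls[veg & (ndsm < tree_height)] = 'grass'
    return cls
```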
The analysis of selected orientation methods of architectural objects' scans
NASA Astrophysics Data System (ADS)
Markiewicz, Jakub S.; Kajdewicz, Irmina; Zawieska, Dorota
2015-05-01
Terrestrial laser scanning (TLS) is commonly used in different areas, inter alia in modelling architectural objects. One of the most important parts of TLS data processing is scan registration, which significantly affects the accuracy of the high-resolution photogrammetric documentation generated. This process is time consuming, especially in the case of a large number of scans, and is mostly based on automatic detection and semi-automatic measurement of control points placed on the object. In the case of complicated historical buildings, it is sometimes forbidden to place survey targets on an object, or it may be difficult to distribute survey targets in the optimal way. Such problems encourage the search for new methods of scan registration which make it possible to eliminate the step of placing survey targets on the object. In this paper the results of the target-based registration method are presented. The survey targets placed on the walls of the historical chambers of the Museum of King Jan III's Palace at Wilanów and on the walls of the ruins of the Bishops' Castle in Iłża were used for scan orientation. Several variants of orientation were performed, taking into account different placements and numbers of survey marks. Afterwards, in subsequent research work, raster images were generated from the scans, and the SIFT and SURF image processing algorithms were used to automatically search for corresponding natural points. The use of automatically identified points for TLS data orientation was analysed. The results of both methods of TLS data registration were summarized and presented in numerical and graphical form.
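After SIFT/SURF descriptors are extracted from the scan-derived rasters, the correspondence step can be sketched with mutual nearest-neighbour matching; keeping only mutual matches is a common way to suppress false correspondences before estimating the scan-to-scan transformation. This numpy sketch assumes descriptors are already available as row vectors (it does not implement SIFT/SURF themselves).

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Mutual nearest-neighbour matching of feature descriptors:
    (i, j) is kept only if B[j] is A[i]'s nearest neighbour AND
    A[i] is B[j]'s nearest neighbour."""
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    ab = d2.argmin(axis=1)          # best match in B for each A
    ba = d2.argmin(axis=0)          # best match in A for each B
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]
```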
Precise Determination of the Orientation of the Solar Image
NASA Astrophysics Data System (ADS)
Győri, L.
2010-12-01
Accurate heliographic coordinates of objects on the Sun have to be known in several fields of solar physics. One of the factors that affect the accuracy of the measurements of the heliographic coordinates is the accuracy of the orientation of a solar image. In this paper the well-known drift method for determining the orientation of the solar image is applied to data taken with a solar telescope equipped with a CCD camera. The factors that influence the accuracy of the method are systematically discussed, and the necessary corrections are determined. These factors are as follows: the trajectory of the center of the solar disk on the CCD with the telescope drive turned off, the astronomical refraction, the change of the declination of the Sun, and the optical distortion of the telescope. The method can be used on any solar telescope that is equipped with a CCD camera and is capable of taking solar full-disk images. As an example to illustrate the method and its application, the orientation of solar images taken with the Gyula heliograph is determined. As a byproduct, a new method to determine the optical distortion of a solar telescope is proposed.
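The core measurement of the drift method, the angle of the disk-centre track on the CCD with the drive turned off, can be sketched as a total-least-squares line fit; the refraction, declination, and distortion corrections discussed above are omitted, and the function name is illustrative.

```python
import numpy as np

def drift_angle(xs, ys):
    """Orientation angle (degrees in [0, 180)) of the drift trajectory
    of the solar-disk centre on the CCD, from a total-least-squares
    line fit (PCA of the centred track)."""
    pts = np.stack([np.asarray(xs, float), np.asarray(ys, float)])
    pts -= pts.mean(axis=1, keepdims=True)
    _, vecs = np.linalg.eigh(pts @ pts.T)
    vx, vy = vecs[:, -1]                 # principal (largest-variance) axis
    return np.degrees(np.arctan2(vy, vx)) % 180.0
```

A total-least-squares fit is preferred over ordinary regression here because measurement noise affects both pixel coordinates of the tracked centre.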
ERIC Educational Resources Information Center
Fernandes, Tânia; Leite, Isabel; Kolinsky, Régine
2016-01-01
At what point in reading development does literacy impact object recognition and orientation processing? Is it specific to mirror images? To answer these questions, forty-six 5- to 7-year-old preschoolers and first graders performed two same-different tasks differing in the matching criterion-orientation-based versus shape-based (orientation…
Perceiving environmental structure from optical motion
NASA Technical Reports Server (NTRS)
Lappin, Joseph S.
1991-01-01
Generally speaking, one of the most important sources of optical information about environmental structure is known to be the deforming optical patterns produced by the movements of the observer (pilot) or environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects are examined.
An object-oriented simulator for 3D digital breast tomosynthesis imaging system.
Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa
2013-01-01
Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of breast to detect the breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct 3D image of breast. Several reconstruction algorithms are available for DBT imaging. Filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed in tomosynthesis imaging problem. We have developed an object-oriented simulator for 3D digital breast tomosynthesis (DBT) imaging system using C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates breast tomosynthesis imaging problem. Results obtained with various methods including algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating performances of the methods using mean structural similarity (MSSIM) values.
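The iterative family the simulator implements can be illustrated by ART in its Kaczmarz row-action form for a linear projection model A x = b; this is a minimal sketch (no TV regularization term, and a toy system in place of real tomosynthesis geometry).

```python
import numpy as np

def art(A, b, n_iters=200, relax=1.0):
    """Algebraic reconstruction technique: cyclic Kaczmarz sweeps
    that project the current estimate onto each row constraint
    a_i . x = b_i in turn, with relaxation factor `relax`."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for ai, bi in zip(A, b):
            denom = ai @ ai
            if denom > 0:
                x += relax * (bi - ai @ x) / denom * ai
    return x
```

For a consistent system the iterates converge to a solution; in real limited-angle tomosynthesis the system is underdetermined and noisy, which is what motivates the TV-regularized variants (ART+TV) compared in the paper.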
Chouinard, Philippe A; Meena, Deiter K; Whitwell, Robert L; Hilchey, Matthew D; Goodale, Melvyn A
2017-05-01
We used TMS to assess the causal roles of the lateral occipital (LO) and caudal intraparietal sulcus (cIPS) areas in the perceptual discrimination of object features. All participants underwent fMRI to localize these areas using a protocol in which they passively viewed images of objects that varied in both form and orientation. fMRI identified six significant brain regions: LO, cIPS, and the fusiform gyrus, bilaterally. In a separate experimental session, we applied TMS to LO or cIPS while the same participants performed match-to-sample form or orientation discrimination tasks. Compared with sham stimulation, TMS to either the left or right LO increased RTs for form but not orientation discrimination, supporting a critical role for LO in form processing for perception- and judgment-based tasks. In contrast, we did not observe any effects when we applied TMS to cIPS. Thus, despite the clear functional evidence of engagement for both LO and cIPS during the passive viewing of objects in the fMRI experiment, the TMS experiment revealed that cIPS is not critical for making perceptual judgments about their form or orientation.
Apparatus and method for imaging metallic objects using an array of giant magnetoresistive sensors
Chaiken, Alison
2000-01-01
A portable, low-power metallic object detector and method for providing an image of a detected metallic object. In one embodiment, the present portable low-power metallic object detector comprises an array of giant magnetoresistive (GMR) sensors. The array of GMR sensors is adapted for detecting the presence of, and compiling image data of, a metallic object. In this embodiment, the array of GMR sensors is arranged in a checkerboard configuration such that the axes of sensitivity of alternate GMR sensors are orthogonally oriented. An electronics portion is coupled to the array of GMR sensors and is adapted to receive and process the image data of the metallic object compiled by the array. The embodiment also includes a display unit, coupled to the electronics portion, which is adapted to display a graphical representation of the metallic object detected by the array of GMR sensors. In so doing, a graphical representation of the detected metallic object is provided.
An Investigation of Automatic Change Detection for Topographic Map Updating
NASA Astrophysics Data System (ADS)
Duncan, P.; Smit, J.
2012-08-01
Changes to the landscape are constantly occurring, and it is essential for geospatial and mapping organisations that these changes are regularly detected and captured so that map databases can be updated to reflect the current status of the landscape. The Chief Directorate of National Geospatial Information (CD: NGI), South Africa's national mapping agency, currently relies on manual methods of detecting and capturing these changes. These manual methods are time consuming and labour intensive, and rely on the skills and interpretation of the operator. It is therefore necessary to move towards more automated methods in the production process at CD: NGI. The aim of this research is to investigate a methodology for automatic or semi-automatic change detection for the purpose of updating topographic databases. The method investigated detects changes through image classification as well as spatial analysis, and is focussed on urban landscapes. The major data inputs into this study are high-resolution aerial imagery and existing topographic vector data. Initial results indicate that traditional pixel-based image classification approaches are unsatisfactory for large-scale land-use mapping and that object-oriented approaches hold more promise. Even with object-oriented image classification, however, broad-scale generalization of techniques has produced inconsistent results. A solution may lie in a hybrid approach combining pixel-based and object-oriented techniques.
NASA Astrophysics Data System (ADS)
Zhongqin, G.; Chen, Y.
2017-12-01
Quickly and automatically identifying the spatial distribution of landslides is essential for the prevention, mitigation, and assessment of landslide hazards. It remains a challenging job owing to the complicated characteristics and vague boundaries of landslide areas in imagery. High-resolution remote sensing images have multiple scales, complex spatial distributions, and abundant features; object-oriented image classification methods can make full use of this information and thus effectively detect landslides after a hazard has happened. In this research we present a new semi-supervised workflow, taking advantage of recent object-oriented image analysis and machine learning algorithms, to quickly locate landslides of different origins in areas in the southwest of China. Besides a sequence of image segmentation, feature selection, object classification, and error testing, this workflow ensembles the feature selection and classifier selection steps. The features this study utilized were normalized difference vegetation index (NDVI) change, textural features derived from grey level co-occurrence matrices (GLCM), spectral features, and others. The improvements in this study show that the algorithm significantly removes redundant features and makes full use of the classifiers. All these improvements lead to higher accuracy in determining the shapes of landslides in high-resolution remote sensing images, with flexibility with respect to different kinds of landslides.
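The GLCM texture features named above can be sketched in numpy for a single offset; real workflows aggregate several offsets and directions, and the quantization level here is an illustrative choice.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Grey-level co-occurrence matrix for a horizontal (1, 0) offset,
    reduced to the contrast and homogeneity statistics commonly used
    as GLCM texture features."""
    img = np.asarray(img, float)
    q = np.floor(img / (img.max() + 1e-9) * levels).astype(int)
    q = q.clip(0, levels - 1)
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # co-occurrence counts
    P /= P.sum()
    i, j = np.indices(P.shape)
    contrast = (P * (i - j) ** 2).sum()
    homogeneity = (P / (1.0 + (i - j) ** 2)).sum()
    return contrast, homogeneity
```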
Jacques, Eveline; Buytaert, Jan; Wells, Darren M; Lewandowski, Michal; Bennett, Malcolm J; Dirckx, Joris; Verbelen, Jean-Pierre; Vissenberg, Kris
2013-06-01
Image acquisition is an important step in the study of cytoskeleton organization. As visual interpretations and manual measurements of digital images are prone to errors and require a great amount of time, a freely available software package named MicroFilament Analyzer (MFA) was developed. The goal was to provide a tool that facilitates high-throughput analysis to determine the orientation of filamentous structures on digital images in a more standardized, objective and repeatable way. Here, the rationale and applicability of the program is demonstrated by analyzing the microtubule patterns in epidermal cells of control and gravi-stimulated Arabidopsis thaliana roots. Differential expansion of cells on either side of the root results in downward bending of the root tip. As cell expansion depends on the properties of the cell wall, this may imply a differential orientation of cellulose microfibrils. As cellulose deposition is orchestrated by cortical microtubules, the microtubule patterns were analyzed. The MFA program detects the filamentous structures on the image and identifies the main orientation(s) within individual cells. This revealed four distinguishable microtubule patterns in root epidermal cells. The analysis indicated that gravitropic stimulation and developmental age are both significant factors that determine microtubule orientation. Moreover, the data show that an altered microtubule pattern does not precede differential expansion. Other possible applications are also illustrated, including field emission scanning electron micrographs of cellulose microfibrils in plant cell walls and images of fluorescent actin. © 2013 The Authors The Plant Journal © 2013 John Wiley & Sons Ltd.
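Determining the main orientation of filamentous structures in an image can be sketched with the structure tensor, a generic gradient-based alternative to MFA's explicit line detection (this is not the MFA algorithm itself).

```python
import numpy as np

def dominant_orientation(img):
    """Dominant filament orientation (degrees in [0, 180)) from the
    global image structure tensor. The tensor's principal axis is the
    dominant gradient direction; filaments run perpendicular to it."""
    gy, gx = np.gradient(np.asarray(img, float))
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # dominant gradient axis
    return (np.degrees(theta) + 90.0) % 180.0        # rotate to filament axis
```

Applied per cell rather than globally, the same statistic yields the per-cell main orientation(s) that the analysis above compares across treatments.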
Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images
NASA Astrophysics Data System (ADS)
Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang
2016-10-01
Considering the inevitable obstacles faced by the pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI to obtain meaningful objects. Second, a distance reweighted mass center learning model is presented to extract the representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the SSC algorithm to the HSI. Faced with the high correlation among the hyperspectral objects, a weighting scheme is adopted to ensure that the highly correlated objects are preferred in the procedure of sparse representation, to reduce the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves the clustering performance with respect to the other state-of-the-art clustering methods, and it significantly reduces the computational time.
NASA Astrophysics Data System (ADS)
Zou, Bin; Lu, Da; Wu, Zhilu; Qiao, Zhijun G.
2016-05-01
The results of model-based target decomposition are the main features used to discriminate urban and non-urban areas in polarimetric synthetic aperture radar (PolSAR) applications. Traditional urban-area extraction methods based on model-based target decomposition usually misclassify ground-trunk structures as urban area or misclassify rotated urban areas as forest. This paper introduces another feature, the orientation angle, to improve the urban-area extraction scheme for accurate urban mapping from PolSAR images. The proposed method first takes the randomness of the orientation angle into account to restrict the urban area and, subsequently, uses the rotation angle to improve the results, so that oriented urban areas are distinguished as double-bounce objects from volume scattering. ESAR L-band PolSAR data of the Oberpfaffenhofen test site area were used to validate the proposed algorithm.
Unruh, Kathryn E.; Sasson, Noah J.; Shafer, Robin L.; Whitten, Allison; Miller, Stephanie J.; Turner-Brown, Lauren; Bodfish, James W.
2016-01-01
Background: Our experiences with the world play a critical role in neural and behavioral development. Children with autism spectrum disorder (ASD) spend a disproportionate amount of time seeking out, attending to, and engaging with aspects of their environment that are largely nonsocial in nature. In this study we adapted an established method for eliciting and quantifying aspects of visual choice behavior related to preference to test the hypothesis that preference for nonsocial sources of stimulation diminishes orientation and attention to social sources of stimulation in children with ASD. Method: Preferential viewing tasks can serve as objective measures of preference, with a greater proportion of viewing time to one item indicative of increased preference. The current task used gaze-tracking technology to examine patterns of visual orientation and attention to stimulus pairs that varied in social (faces) and nonsocial content (high autism interest or low autism interest). Participants included both adolescents diagnosed with ASD and typically developing; groups were matched on IQ and gender. Results: Repeated measures ANOVA revealed that individuals with ASD had a significantly greater latency to first fixate on social images when this image was paired with a high autism interest image, compared to a low autism interest image pairing. Participants with ASD showed greater total look time to objects, while typically developing participants preferred to look at faces. Groups also differed in number and average duration of fixations to social and object images. In the ASD group only, a measure of nonsocial interest was associated with reduced preference for social images when paired with high autism interest images. Conclusions: In ASD, the presence of nonsocial sources of stimulation can significantly increase the latency of look time to social sources of information. 
These results suggest that atypicalities in social motivation in ASD may be context-dependent, with a greater degree of plasticity than is assumed by existing social motivation accounts of ASD. PMID:28066169
Ur Rehman, Yasar Abbas; Tariq, Muhammad; Khan, Omar Usman
2015-01-01
Object localization plays a key role in many popular applications of Wireless Multimedia Sensor Networks (WMSN) and, as a result, has acquired significant status in the research community. A significant body of research performs this task without considering node orientation, object geometry, and environmental variations; as a result, the localized object does not reflect real-world scenarios. In this paper, a novel object localization scheme for WMSN is proposed that utilizes range-free localization, computer vision, and principal component analysis based algorithms. The proposed approach provides the best possible approximation of the distance between a WMSN sink and an object, and of the orientation of the object, using image-based information. Simulation results report 99% efficiency and an error ratio of 0.01 (around 1 ft) when compared to other popular techniques. PMID:26528919
Data Services - Naval Oceanography Portal
Apparent Disk of Solar System Object: creates a synthetic image of the telescopic appearance of the Moon or another solar system object for a specified date and time.
Determining Attitude of Object from Needle Map Using Extended Gaussian Image.
1983-04-01
Report AD-A131 617, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge.
Zhou, Zhen; Huang, Jingfeng; Wang, Jing; Zhang, Kangyu; Kuang, Zhaomin; Zhong, Shiquan; Song, Xiaodong
2015-01-01
Most areas planted with sugarcane are located in southern China. However, remote sensing of sugarcane has been limited because useable remote sensing data are limited due to the cloudy climate of this region during the growing season and severe spectral mixing with other crops. In this study, we developed a methodology for automatically mapping sugarcane over large areas using time-series middle-resolution remote sensing data. For this purpose, two major techniques were used, the object-oriented method (OOM) and data mining (DM). In addition, time-series Chinese HJ-1 CCD images were obtained during the sugarcane growing period. Image objects were generated using a multi-resolution segmentation algorithm, and DM was implemented using the AdaBoost algorithm, which generated the prediction model. The prediction model was applied to the HJ-1 CCD time-series image objects, and then a map of the sugarcane planting area was produced. The classification accuracy was evaluated using independent field survey sampling points. The confusion matrix analysis showed that the overall classification accuracy reached 93.6% and that the Kappa coefficient was 0.85. Thus, the results showed that this method is feasible, efficient, and applicable for extrapolating the classification of other crops in large areas where the application of high-resolution remote sensing data is impractical due to financial considerations or because qualified images are limited. PMID:26528811
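The AdaBoost step that generates the prediction model can be sketched with a minimal boosted-stump learner over per-object features (e.g. time-series spectral statistics). This toy numpy version with labels in {-1, +1} is a stand-in for the authors' AdaBoost implementation, not their code.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost over axis-aligned decision stumps. Each round
    picks the weighted-error-minimizing stump, then reweights samples
    to emphasize the ones it got wrong."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(n_rounds):
        best = None
        for f in range(d):
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        pred = sign * np.where(X[:, f] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified samples
        w /= w.sum()
        model.append((alpha, f, thr, sign))
    return model

def adaboost_predict(model, X):
    score = sum(a * s * np.where(X[:, f] >= t, 1, -1) for a, f, t, s in model)
    return np.sign(score)
```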
Barta, András; Horváth, Gábor
2003-12-01
The apparent position, size, and shape of aerial objects viewed binocularly from water change as a result of the refraction of light at the water surface. Earlier studies of the refraction-distorted structure of the aerial binocular visual field of underwater observers were restricted to either vertically or horizontally oriented eyes. Here we calculate the position of the binocular image point of an aerial object point viewed by two arbitrarily positioned underwater eyes when the water surface is flat. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveae, the structure of the aerial binocular visual field is computed and visualized as a function of the relative positions of the eyes. We also analyze two erroneous representations of the underwater imaging of aerial objects that have occurred in the literature. It is demonstrated that the structure of the aerial binocular visual field of underwater observers distorted by refraction is more complex than has been thought previously.
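The geometry underlying such calculations is Snell's law at the flat air-water interface. A minimal sketch (not the authors' full binocular model) shows the refracted direction of an aerial ray for an underwater eye, and in particular how the whole sky compresses into Snell's window:

```python
import math

n_air, n_water = 1.0, 1.33  # refractive indices at a flat water surface

def apparent_angle_underwater(theta_air_deg):
    """Angle from the vertical, in water, of a ray arriving from the air
    at theta_air from the vertical (Snell's law: n_a sin a = n_w sin w)."""
    s = (n_air / n_water) * math.sin(math.radians(theta_air_deg))
    return math.degrees(math.asin(s))

# The entire sky (0..90 deg) compresses into Snell's window of ~48.8 deg
print(round(apparent_angle_underwater(90.0), 1))
```

Tracing such refracted rays for each eye separately, and intersecting them, is the basis of the binocular image-point computation the abstract describes.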
Representations of Shape in Object Recognition and Long-Term Visual Memory
1993-02-11
[Only reference fragments of this scanned report survive. They discuss whether viewpoint-dependent features or object-based, orientation-independent representations suffice for "basic-level" categorization (Biederman, 1987; Corballis, 1988), and cite Biederman, I. (1987), "Recognition-by-components: A theory of human image understanding," Psychological Review, 94, 115-147.]
Seed robustness of oriented relative fuzzy connectedness: core computation and its applications
NASA Astrophysics Data System (ADS)
Tavares, Anderson C. M.; Bejar, Hans H. C.; Miranda, Paulo A. V.
2017-02-01
In this work, we present a formal definition and an efficient algorithm to compute the cores of Oriented Relative Fuzzy Connectedness (ORFC), a recent seed-based segmentation technique. The core is the region within which a seed can be moved without altering the segmentation, an important property for robust techniques and for reducing user effort. We show how ORFC cores can be used to build a powerful hybrid image segmentation approach. We also provide new theoretical relations between ORFC and the Oriented Image Foresting Transform (OIFT), as well as between their cores. Experimental comparisons with several methods show that the hybrid approach maintains high accuracy, avoids the shrinking problem, and is robust to seed placement inside the desired object owing to the properties of the cores.
Spatial and symbolic queries for 3D image data
NASA Astrophysics Data System (ADS)
Benson, Daniel C.; Zick, Gregory L.
1992-04-01
We present a query system for an object-oriented biomedical imaging database containing 3-D anatomical structures and their corresponding 2-D images. The graphical interface facilitates the formation of spatial queries, nonspatial or symbolic queries, and combined spatial/symbolic queries. A query editor is used for the creation and manipulation of 3-D query objects as volumes, surfaces, lines, and points. Symbolic predicates are formulated through a combination of text fields and multiple choice selections. Query results, which may include images, image contents, composite objects, graphics, and alphanumeric data, are displayed in multiple views. Objects returned by the query may be selected directly within the views for further inspection or modification, or for use as query objects in subsequent queries. Our image database query system provides visual feedback and manipulation of spatial query objects, multiple views of volume data, and the ability to combine spatial and symbolic queries. The system allows for incremental enhancement of existing objects and the addition of new objects and spatial relationships. The query system is designed for databases containing symbolic and spatial data. This paper discusses its application to data acquired in biomedical 3-D image reconstruction, but it is applicable to other areas such as CAD/CAM, geographical information systems, and computer vision.
ERIC Educational Resources Information Center
Cupchik, Gerald C.; Vartanian, Oshin; Crawley, Adrian; Mikulis, David J.
2009-01-01
When we view visual images in everyday life, our perception is oriented toward object identification. In contrast, when viewing visual images "as artworks", we also tend to experience subjective reactions to their stylistic and structural properties. This experiment sought to determine how cognitive control and perceptual facilitation contribute…
Functional implications of orientation maps in primary visual cortex
NASA Astrophysics Data System (ADS)
Koch, Erin; Jin, Jianzhong; Alonso, Jose M.; Zaidi, Qasim
2016-11-01
Stimulus orientation in the primary visual cortex of primates and carnivores is mapped as iso-orientation domains radiating from pinwheel centres, where orientation preferences of neighbouring cells change circularly. Whether this orientation map has a function is currently debated, because many mammals, such as rodents, do not have such maps. Here we show that two fundamental properties of visual cortical responses, contrast saturation and cross-orientation suppression, are stronger within cat iso-orientation domains than at pinwheel centres. These differences develop when excitation (not normalization) from neighbouring oriented neurons is applied to different cortical orientation domains and then balanced by inhibition from un-oriented neurons. The functions of the pinwheel mosaic emerge from these local intra-cortical computations: Narrower tuning, greater cross-orientation suppression and higher contrast gain of iso-orientation cells facilitate extraction of object contours from images, whereas broader tuning, greater linearity and less suppression of pinwheel cells generate selectivity for surface patterns and textures.
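Contrast saturation of the kind compared across the orientation map is commonly modelled with a Naka-Rushton (divisive normalization) response function; the sketch below uses illustrative semi-saturation constants, not values from the study:

```python
import numpy as np

def response(contrast, c50, n=2.0):
    """Naka-Rushton contrast response: saturates as contrast >> c50."""
    c = np.asarray(contrast, dtype=float)
    return c**n / (c**n + c50**n)

# Stronger saturation (lower c50) inside iso-orientation domains than at
# pinwheel centres is the kind of difference reported; c50 values here
# are illustrative only.
iso = response([0.1, 0.5, 1.0], c50=0.2)
pinwheel = response([0.1, 0.5, 1.0], c50=0.6)
# iso-orientation response saturates earlier: smaller gain at high contrast
print(iso[2] - iso[1] < pinwheel[2] - pinwheel[1])
```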
NASA Astrophysics Data System (ADS)
Jiao, Q. S.; Luo, Y.; Shen, W. H.; Li, Q.; Wang, X.
2018-04-01
The Jiuzhaigou earthquake caused mountain slopes to collapse, producing numerous landslides in the Jiuzhaigou scenic area and along surrounding roads, blocking roads and causing serious ecological damage. Because of the urgency of the rescue, the authors deployed an unmanned aerial vehicle (UAV) and entered the disaster area as early as August 9 to acquire aerial images near the epicenter. After summarizing the characteristics of earthquake landslides in aerial images, the object-oriented analysis method was applied: landslide image objects were obtained by multi-scale segmentation, and the feature rule set at each level was built automatically with the SEaTH (Separability and Thresholds) algorithm to enable rapid landslide extraction. Compared with visual interpretation, the object-oriented automatic landslide extraction method achieved an accuracy of 94.3%. The spatial distribution of the earthquake landslides was significantly positively correlated with slope and relief, negatively correlated with roughness, and showed no obvious correlation with aspect; the likely reason for the latter is that the study area lies far from the seismogenic fault. This work provides technical support for earthquake field emergency response, earthquake landslide prediction, and disaster loss assessment.
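SEaTH ranks features by the Jeffries-Matusita separability of two classes, assumed Gaussian, and places the decision threshold where the class densities intersect. A minimal sketch with synthetic class statistics (the feature values are made up, not from the Jiuzhaigou data):

```python
import math

def jm_distance(m1, s1, m2, s2):
    """Jeffries-Matusita separability between two 1-D Gaussian classes,
    as used by SEaTH to rank features (2 = perfectly separable)."""
    b = (0.25 * (m1 - m2) ** 2 / (s1**2 + s2**2)
         + 0.5 * math.log((s1**2 + s2**2) / (2 * s1 * s2)))
    return 2 * (1 - math.exp(-b))

def gaussian_threshold(m1, s1, m2, s2):
    """Intersection of the two class densities = decision threshold
    (closed-form root of the quadratic; equal priors assumed)."""
    a = 1 / s1**2 - 1 / s2**2
    bq = 2 * (m2 / s2**2 - m1 / s1**2)
    c = m1**2 / s1**2 - m2**2 / s2**2 - 2 * math.log(s2 / s1)
    if abs(a) < 1e-12:          # equal variances: midpoint of the means
        return (m1 + m2) / 2
    d = math.sqrt(bq**2 - 4 * a * c)
    roots = [(-bq + d) / (2 * a), (-bq - d) / (2 * a)]
    # keep the root lying between the two class means
    return next(r for r in roots if min(m1, m2) <= r <= max(m1, m2))

# e.g. landslide vs. non-landslide brightness (synthetic numbers)
print(round(jm_distance(180, 15, 120, 20), 3))      # ~1.536: well separated
print(round(gaussian_threshold(180, 15, 120, 20), 1))
```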
Observation of Phase Objects by Using an X-ray Microscope with a Foucault Knife-Edge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, N.; Sasaya, T.; Imai, Y.
2011-09-09
An x-ray microscope with a zone plate was assembled at the synchrotron radiation source of BL3C, Photon Factory. A Foucault knife-edge was set at the back focal plane of the objective zone plate and phase retrieval was tested by scanning the knife-edge. A preliminary result shows that scanning the knife-edge during exposure was effective for phase retrieval. Phase-contrast tomography was investigated using differential projection images calculated from two Schlieren images with oppositely oriented knife-edges. Fairly good reconstruction images of polystyrene beads and spores could be obtained.
Determining fast orientation changes of multi-spectral line cameras from the primary images
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2012-01-01
Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It does not require any additional camera hardware or onboard processing capabilities but the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras with a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of different spectral bands. With their help a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of different conditions and achieved precise and reliable results.
Vision-Based Geo-Monitoring - A New Approach for an Automated System
NASA Astrophysics Data System (ADS)
Wagner, A.; Reiterer, A.; Wasmeier, P.; Rieke-Zapp, D.; Wunderlich, T.
2012-04-01
The necessity for monitoring geo-risk areas such as rock slides is growing due to the increasing probability of such events caused by environmental change. Geodetic deformation monitoring turns life under such a threat into a calculable risk. An in-depth monitoring concept with modern measurement technologies allows the estimation of the hazard potential and the prediction of life-threatening situations. The movements can be monitored by sensors placed in the unstable slope area. In most cases, it is necessary to enter the regions at risk in order to place the sensors and maintain them. Long-range monitoring systems (e.g. terrestrial laser scanners, total stations, ground-based synthetic aperture radar) allow this risk to be avoided. To close the gap between the existing low-resolution, medium-accuracy sensors and conventional (co-operative target-based) surveying methods, image-assisted total stations (IATS) are a promising solution. IATS offer the user (e.g. metrology expert) an image capturing system (CCD/CMOS camera) in addition to 3D point measurements. The images of the telescope's visual field are projected onto the camera's chip. With appropriate calibration, these images are accurately geo-referenced and oriented, since the horizontal and vertical angles of rotation are continuously recorded. The oriented images can be used directly for direction measurements with no need for object control points or further photogrammetric orientation processes. IATS are able to provide high-density deformation fields with high accuracy (down to the mm range) in all three coordinate directions. Tests have shown that with suitable image processing a measurement precision of 0.05 pixel ± 0.04·σ is possible (which corresponds to 0.03 mgon ± 0.04·σ). These results must be interpreted with the caveat that they refer to image-based measurements only; for measuring in 3D object space, the precision of pointing has to be taken into account. 
IATS can be used in two different ways: (1) combining two measurement systems and measuring object points by spatial intersection, or (2) using one measurement system and combining image-based techniques with the integrated distance measurement unit. Besides the system configuration, the detection of features inside the captured images can be performed with different approaches, e.g. template-, edge-, and/or point-based methods. Our system is able to select a suitable algorithm based on different object characteristics, such as object geometry, texture, behaviour, etc. The long-term objective is the research, development and installation of a fully-automated measurement system, including a data analysis and interpretation component. Acknowledgments: The presented research has been supported by the Alexander von Humboldt Foundation and by the European Science Foundation (ESF).
NASA Astrophysics Data System (ADS)
Weber, V. L.
2018-03-01
We statistically analyze the images of the objects of the "light-line" and "half-plane" types which are observed through a randomly irregular air-water interface. The expressions for the correlation function of fluctuations of the image of an object given in the form of a luminous half-plane are found. The possibility of determining the spatial and temporal correlation functions of the slopes of a rough water surface from these relationships is shown. The problem of the probability of intersection of a small arbitrarily oriented line segment by the contour image of a luminous straight line is solved. Using the results of solving this problem, we show the possibility of determining the values of the curvature variances of a rough water surface. A practical method for obtaining an image of a rectilinear luminous object in the light rays reflected from the rough surface is proposed. It is theoretically shown that such an object can be synthesized by temporal accumulation of the image of a point source of light rapidly moving in the horizontal plane with respect to the water surface.
Effects of the symmetry axis orientation of a TI overburden on seismic images
NASA Astrophysics Data System (ADS)
Chang, Chih-Hsiung; Chang, Young-Fo; Tseng, Cheng-Wei
2017-07-01
In active tectonic regions, the primary formations are often tilted and subjected to the processes of folding and/or faulting. Dipping formations may be categorised as tilted transverse isotropy (TTI). While carrying out hydrocarbon exploration in areas of orogenic structures, mispositioning and defocusing effects in apparent reflections are often caused by the tilted transverse isotropy of the overburden. In this study, scaled physical modelling was carried out to demonstrate the behaviours of seismic wave propagation and imaging problems incurred by transverse isotropic (TI) overburdens that possess different orientations of the symmetry axis. To facilitate our objectives, zero-offset reflections were acquired from four stratum-fault models to image the same structures that were overlain by a TI (phenolite) slab. The symmetry axis of the TI slab was vertical, tilted or horizontal. In response to the symmetry axis orientations, spatial shifts and asymmetrical diffraction patterns in apparent reflections were observed in the acquired profiles. Given the different orientations of the symmetry axis, numerical manipulations showed that the imaged events could be well described by theoretical ray paths computed by the trial-and-error ray method and Fermat's principle (TERF) method. In addition, outputs of image restoration show that the imaging problems, i.e. spatial shift in the apparent reflections, can be properly handled by the ray-based anisotropic 2D Kirchhoff time migration (RAKTM) method.
The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Chen, M.; Wang, X.; Dou, A.; Wu, X.
2018-04-01
Seismic damage information about buildings extracted from remote sensing (RS) imagery is valuable for supporting relief efforts and effectively reducing earthquake losses. Both traditional pixel-based and object-oriented methods have shortcomings in extracting object information: pixel-based methods cannot make full use of the contextual information of objects, while object-oriented methods suffer from imperfect image segmentation and the difficulty of choosing a feature space. In this paper, a new strategy is proposed that combines a Convolutional Neural Network (CNN) with image segmentation to extract building damage information from remote sensing imagery. The key idea comprises two steps: first, use the CNN to predict the damage probability of each pixel; then integrate these probabilities within each segmentation spot. The method is tested by extracting collapsed and uncollapsed buildings from aerial imagery acquired in Longtoushan Town after the Ms 6.5 Ludian County, Yunnan Province earthquake. The results demonstrate the effectiveness of the proposed method in extracting post-earthquake building damage information.
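The two-step fusion idea (per-pixel CNN probabilities averaged within each segmentation spot) can be sketched with a toy probability map and label image; the arrays below are placeholders for a real CNN output and segmentation:

```python
import numpy as np

# Toy per-pixel damage probabilities (as a CNN might output)
prob = np.array([[0.9, 0.8, 0.2],
                 [0.7, 0.9, 0.1],
                 [0.1, 0.2, 0.1]])
# Segment label for each pixel (as a segmentation algorithm might output)
segments = np.array([[0, 0, 1],
                     [0, 0, 1],
                     [2, 2, 1]])

n_seg = segments.max() + 1
sums = np.bincount(segments.ravel(), weights=prob.ravel(), minlength=n_seg)
counts = np.bincount(segments.ravel(), minlength=n_seg)
seg_prob = sums / counts            # mean probability per segment
collapsed = seg_prob > 0.5          # segment-level damage decision
print(seg_prob.round(2), collapsed)
```

Integrating over segments in this way suppresses isolated pixel-level errors, which is the motivation the abstract gives for combining the CNN with segmentation.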
Sountsov, Pavel; Santucci, David M.; Lisman, John E.
2011-01-01
Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated. PMID:22125522
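One ingredient of such invariant transforms can be illustrated directly: the Fourier magnitude spectrum discards translation, and log-polar resampling then turns rotation and scaling into shifts (the paper's actual transform, based on local spatial frequency analysis applied twice, differs in detail). A minimal demonstration of the translation step:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))
# Circularly shift the image by (5, 9) pixels
shifted = np.roll(np.roll(img, 5, axis=0), 9, axis=1)

# By the Fourier shift theorem, translation changes only the phase,
# so the magnitude spectrum is identical for both images.
mag = np.abs(np.fft.fft2(img))
mag_shifted = np.abs(np.fft.fft2(shifted))
print(np.allclose(mag, mag_shifted))  # True: translation removed
```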
Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes
NASA Astrophysics Data System (ADS)
Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio
2017-12-01
A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating on the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as a UAV, because the full spectrum is recorded in a time-sequential manner. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The remaining bands, initially without orientation, are then matched to the oriented bands, taking the 3D scene into account, to provide their exterior orientations; afterwards, hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI), on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5 pixel. 
The empirical assessment proved the performance and showed that, with the novel method, most parts of the band misalignments were less than the pixel size. Furthermore, it was shown that the performance of the band alignment was dependent on the spatial distance from the reference band.
Planetary Image Geometry Library
NASA Technical Reports Server (NTRS)
Deen, Robert C.; Pariser, Oleg
2010-01-01
The Planetary Image Geometry (PIG) library is a multi-mission library used for projecting images (EDRs, or Experiment Data Records) and managing their geometry for in-situ missions. A collection of models describes cameras and their articulation, allowing application programs such as mosaickers, terrain generators, and pointing correction tools to be written in a multi-mission manner, without any knowledge of parameters specific to the supported missions. Camera model objects allow transformation of image coordinates to and from view vectors in XYZ space. Pointing models, specific to each mission, describe how to orient the camera models based on telemetry or other information. Surface models describe the surface in general terms. Coordinate system objects manage the various coordinate systems involved in most missions. File objects manage access to metadata (labels, including telemetry information) in the input EDRs and RDRs (Reduced Data Records). Label models manage metadata information in output files. Site objects keep track of different locations where the spacecraft might be at a given time. Radiometry models allow correction of radiometry for an image. Mission objects contain basic mission parameters. Pointing adjustment ("nav") files allow pointing to be corrected. The object-oriented structure (C++) makes it easy to subclass just the pieces of the library that are truly mission-specific. Typically, this involves just the pointing model and coordinate systems, and parts of the file model. Once the library was developed (initially for Mars Polar Lander, MPL), adding new missions ranged from two days to a few months, resulting in significant cost savings as compared to rewriting all the application programs for each mission. Currently supported missions include Mars Pathfinder (MPF), MPL, Mars Exploration Rover (MER), Phoenix, and Mars Science Lab (MSL). Applications based on this library create the majority of operational image RDRs for those missions. 
A Java wrapper around the library allows parts of it to be used from Java code (via a native JNI interface). Future conversions of all or part of the library to Java are contemplated.
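The camera-model role described above (transforming image coordinates to and from view vectors) reduces, for an ideal pinhole, to the sketch below. PIG's real models add mission-specific distortion and articulation; the class here is illustrative only:

```python
import numpy as np

class PinholeCamera:
    """Minimal pinhole camera model: pixel <-> unit view vector."""
    def __init__(self, fx, fy, cx, cy):
        # Intrinsic matrix: focal lengths (pixels) and principal point
        self.K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])

    def pixel_to_ray(self, u, v):
        ray = np.linalg.inv(self.K) @ np.array([u, v, 1.0])
        return ray / np.linalg.norm(ray)

    def ray_to_pixel(self, ray):
        p = self.K @ (ray / ray[2])  # project onto the image plane
        return p[0], p[1]

cam = PinholeCamera(fx=800, fy=800, cx=320, cy=240)
ray = cam.pixel_to_ray(400, 300)
u, v = cam.ray_to_pixel(ray)
print(round(u, 6), round(v, 6))  # round-trips to (400.0, 300.0)
```

Subclassing such a model per mission, as the library does for pointing and distortion, keeps the mosaicking and terrain applications mission-independent.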
NASA Astrophysics Data System (ADS)
Cheng, Gong; Han, Junwei; Zhou, Peicheng; Guo, Lei
2014-12-01
The rapid development of remote sensing technology has facilitated the acquisition of remote sensing images with higher and higher spatial resolution, but how to automatically understand the image contents is still a big challenge. In this paper, we develop a practical and rotation-invariant framework for multi-class geospatial object detection and geographic image classification based on a collection of part detectors (COPD). The COPD is composed of a set of representative and discriminative part detectors, where each part detector is a linear support vector machine (SVM) classifier used for the detection of objects or recurring spatial patterns within a certain range of orientation. Specifically, when performing multi-class geospatial object detection, we learn a set of seed-based part detectors where each part detector corresponds to a particular viewpoint of an object class, so the collection of them provides a solution for rotation-invariant detection of multi-class objects. When performing geographic image classification, we utilize a large number of pre-trained part detectors to discover distinctive visual parts from images and use them as attributes to represent the images. Comprehensive evaluations on two remote sensing image databases and comparisons with some state-of-the-art approaches demonstrate the effectiveness and superiority of the developed framework.
NASA Technical Reports Server (NTRS)
Papanyan, Valeri; Oshle, Edward; Adamo, Daniel
2008-01-01
Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of the jettisoned object's position and velocity vectors. As examples of post-EVA analyses, we present the Floating Potential Probe (FPP) and the Russian "Orlan" space suit jettisons, as well as the near-real-time (provided within several hours of separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video-clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt), and then the location of the jettisoned object was calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
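The core geometric computation (locating the object from two synchronized, oriented camera views) amounts to ray triangulation. A minimal midpoint-method sketch with synthetic camera positions, not the actual ISS geometry:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach between rays p1 + t1*d1 and p2 + t2*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    # Closed-form ray parameters minimizing |(p1 + t1 d1) - (p2 + t2 d2)|
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / (a * c - b * b)
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / (a * c - b * b)
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2

# Two camera positions and view rays toward the same object (synthetic)
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
obj = np.array([4.0, 5.0, 3.0])
x = triangulate(p1, obj - p1, p2, obj - p2)
print(np.round(x, 6))  # recovers the object position [4. 5. 3.]
```

Repeating this for each synchronized video frame yields the position time series from which the departure velocity vector can be estimated.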
Homography-based visual servo regulation of mobile robots.
Fang, Yongchun; Dixon, Warren E; Dawson, Darren M; Chawda, Prakash
2005-10-01
A monocular camera-based vision system attached to a mobile robot (i.e., the camera-in-hand configuration) is considered in this paper. By comparing corresponding target points of an object from two different camera images, geometric relationships are exploited to derive a transformation that relates the actual position and orientation of the mobile robot to a reference position and orientation. This transformation is used to synthesize a rotation and translation error system from the current position and orientation to the fixed reference position and orientation. Lyapunov-based techniques are used to construct an adaptive estimate to compensate for a constant, unmeasurable depth parameter, and to prove asymptotic regulation of the mobile robot. The contribution of this paper is that Lyapunov techniques are exploited to craft an adaptive controller that enables mobile robot position and orientation regulation despite the lack of an object model and the lack of depth information. Experimental results are provided to illustrate the performance of the controller.
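The first step of such homography-based schemes, estimating the homography that relates corresponding target points in the current and reference images, can be sketched with the standard DLT algorithm; the point set and ground-truth homography below are synthetic:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: H mapping src -> dst (4+ correspondences)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null-space vector of the stacked constraints = h (up to scale)
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: a known homography is recovered from 4 points
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = []
for x, y in src:
    w = H_true @ np.array([x, y, 1.0])
    dst.append((w[0] / w[2], w[1] / w[2]))
H = homography_dlt(src, dst)
print(np.allclose(H, H_true, atol=1e-5))  # True
```

In the paper, the recovered homography is then decomposed into the rotation and (scaled) translation errors that the adaptive controller regulates to zero.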
Perceived object stability depends on multisensory estimates of gravity.
Barnett-Cowan, Michael; Fleming, Roland W; Singh, Manish; Bülthoff, Heinrich H
2011-04-27
How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information. In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity). Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
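The physical rule invoked above has a simple closed form for a box-like object: it tips once tilted past atan(half-width / centre-of-mass height). A toy calculation (not the stimuli used in the study):

```python
import math

def critical_angle_deg(half_width, com_height):
    """Tilt angle at which the gravity-projected centre of mass leaves
    the support base of a box: atan(half_width / com_height)."""
    return math.degrees(math.atan2(half_width, com_height))

print(round(critical_angle_deg(0.5, 1.0), 1))   # squat box: ~26.6 deg
print(round(critical_angle_deg(0.25, 1.0), 1))  # narrower box: ~14.0 deg
```

The study's finding is that observers' perceived critical angle shifts with their multisensory gravity estimate, rather than matching this purely visual geometry.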
Iplt--image processing library and toolkit for the electron microscopy community.
Philippsen, Ansgar; Schenk, Andreas D; Stahlberg, Henning; Engel, Andreas
2003-01-01
We present the foundation for establishing a modular, collaborative, integrated, open-source architecture for image processing of electron microscopy images, named iplt. It is designed around object oriented paradigms and implemented using the programming languages C++ and Python. In many aspects it deviates from classical image processing approaches. This paper intends to motivate developers within the community to participate in this on-going project. The iplt homepage can be found at http://www.iplt.org.
NASA Astrophysics Data System (ADS)
Ferwerda, James A.
2013-03-01
We are developing tangible imaging systems [1-4] that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop or tablet-shaped form-factors. Custom software allows the orientation of a device and the position of the observer to be tracked in real-time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones, with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook, our first implementation on a laptop computer; tangiView, a more refined implementation on a tablet device; tangiPaint, a tangible digital painting application; and phantoView, an application that takes the tangible imaging concept into stereoscopic 3D.
Markant, Julie; Worden, Michael S.; Amso, Dima
2015-01-01
Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in the attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engage suppression of a previously attended location would boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, 1980; Posner, Rafal, & Choate, 1985) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. PMID:25701278
A colour image reproduction framework for 3D colour printing
NASA Astrophysics Data System (ADS)
Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie
2016-10-01
This paper introduces current technologies in full-colour 3D printing and proposes a framework for the colour image reproduction process in 3D colour printing, with a special focus on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to reproduce colours faithfully in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that applying the proposed colour image reproduction framework significantly enhances colour reproduction performance. With subsequent colour corrections, a further improvement in the colour process is achieved for 3D printed objects.
exVis: a visual analysis tool for wind tunnel data
NASA Astrophysics Data System (ADS)
Deardorff, D. G.; Keeley, Leslie E.; Uselton, Samuel P.
1998-05-01
exVis is a software tool created to support interactive display and analysis of data collected during wind tunnel experiments. It is a result of a continuing project to explore the uses of information technology in improving the effectiveness of aeronautical design professionals. The data analysis goals are accomplished by allowing aerodynamicists to display and query data collected by new data acquisition systems and to create traditional wind tunnel plots from this data by interactively interrogating these images. exVis was built as a collection of distinct modules to allow for rapid prototyping, to foster evolution of capabilities, and to facilitate object reuse within other applications being developed. It was implemented using C++ and Open Inventor, commercially available object-oriented tools. The initial version was composed of three main classes. Two of these modules are autonomous viewer objects intended to display the test images (ImageViewer) and the plots (GraphViewer). The third main class is the Application User Interface (AUI) which manages the passing of data and events between the viewers, as well as providing a user interface to certain features. User feedback was obtained on a regular basis, which allowed for quick revision cycles and appropriately enhanced feature sets. During the development process additional classes were added, including a color map editor and a data set manager. The ImageViewer module was substantially rewritten to add features and to use the data set manager. The use of an object-oriented design was successful in allowing rapid prototyping and easy feature addition.
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Strack, Ruediger
1992-04-01
apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general purpose workstation, including Ethernet, a magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.
Vinson, David W.; Abney, Drew H.; Dale, Rick; Matlock, Teenie
2014-01-01
Three decades of research suggests that cognitive simulation of motion is involved in the comprehension of object location, bodily configuration, and linguistic meaning. For example, the remembered location of an object associated with actual or implied motion is typically displaced in the direction of motion. In this paper, two experiments explore context effects in spatial displacement. They provide a novel approach to estimating the remembered location of an implied motion image by employing a cursor-positioning task. Both experiments examine how the remembered spatial location of a person is influenced by subtle differences in implied motion, specifically, by shifting the orientation of the person’s body to face upward or downward, and by pairing the image with motion language that differed on intentionality, fell versus jumped. The results of Experiment 1, a survey-based experiment, suggest that language and body orientation influenced vertical spatial displacement. Results of Experiment 2, a task that used Adobe Flash and Amazon Mechanical Turk, showed consistent effects of body orientation on vertical spatial displacement but no effect of language. Our findings are in line with previous work on spatial displacement that uses a cursor-positioning task with implied motion stimuli. We discuss how different ways of simulating motion can influence spatial memory. PMID:25071628
Local surface curvature analysis based on reflection estimation
NASA Astrophysics Data System (ADS)
Lu, Qinglin; Laligant, Olivier; Fauvet, Eric; Zakharova, Anastasia
2015-07-01
In this paper, we propose a novel reflection-based method to estimate the local orientation of a specular surface. For a calibrated scene with a fixed light band, the band is reflected by the surface onto the image plane of a camera, and the local geometry between the surface and the reflected band is estimated. First, in order to find the relationship linking the object position, the object surface orientation and the band reflection, we study the fundamental geometry between a specular mirror surface and a band source. We then extend our approach to spherical surfaces with arbitrary curvature. Experiments are conducted with a mirror surface and a spherical surface. The results show that our method is able to obtain the local surface orientation merely by measuring the displacement and the shape of the reflection.
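The specular geometry this method builds on can be sketched directly: for mirror reflection, r = i - 2(i.n)n, so given the unit incident and reflected ray directions the outward surface normal is parallel to their difference (r - i). This is a minimal illustration of the underlying law, not the authors' calibrated band-reflection pipeline:

```python
import numpy as np

def surface_normal(incident, reflected):
    """Local normal of a specular surface from unit ray directions.

    Mirror reflection obeys r = i - 2(i.n)n, hence r - i = -2(i.n)n.
    For a ray striking the surface from outside (i.n < 0), the outward
    unit normal is (r - i) normalised.
    """
    i = np.asarray(incident, float); i /= np.linalg.norm(i)
    r = np.asarray(reflected, float); r /= np.linalg.norm(r)
    n = r - i
    return n / np.linalg.norm(n)

# A ray travelling straight down onto a horizontal mirror reflects
# straight up; the recovered normal points along +z:
n = surface_normal([0, 0, -1], [0, 0, 1])
```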
New knowledge in determining the astronomical orientation of Incas object in Ollantaytambo, Peru
NASA Astrophysics Data System (ADS)
Hanzalová, K.; Klokočník, J.; Kostelecký, J.
2014-06-01
This paper deals with the astronomical orientation of Inca objects in Ollantaytambo, which is located about 35 km southeast of Machu Picchu and about 40 km northwest of Cusco, in the Urubamba valley. Anyone writing about Ollantaytambo should read Protzen (1993), who devoted his monograph to the description and interpretation of that locality. The book by Salazar and Salazar (2005) deals, among other things, with the orientation of objects in Ollantaytambo with respect to the cardinal directions. Zawaski and Malville (2007) documented the astronomical context of major monuments at nine sites in Peru, including Ollantaytambo. We tested the astronomical orientation at these places to confirm or disprove hypotheses about the purpose of the Inca objects. To assess the orientation of the objects we used our own measurements as well as satellite images from Google Earth and a digital elevation model from ASTER. The satellite images were used for an approximate estimate of the astronomical orientation. The digital elevation model is useful in the mountains, where we need the real horizon to calculate sunset and sunrise on specific days (the solstices), which were very important to the Inca people. The Incas famously worshipped the Sun; by it they determined when to plant and when to harvest the crop. In this paper we focus on the Temple of the Sun, also known as the Wall of the Six Monoliths, and test which astronomical phenomenon is connected with this Temple. First, we tested the winter solstice sunrise and the rising of the Pleiades for the epochs 2000, 1500 and 1000 A.D. According to our results the Temple is connected neither with the winter solstice sunrise nor with the Pleiades. We then also tested the winter solstice sunset, trying to use the line from an observation point near the ruins of the Temple of the Sun towards the west-northwest, in the direction of sunset. The astronomical azimuth from this point was about 5° less than needed.
From these results we concluded that it should be possible to find another observation point. Following Salazar and Salazar (2005), we located an observation point at the corner (east rectangle) of the pyramid of Pacaritanpu, down by the riverside. A line connects the east rectangular "platform" at the river, runs along the Inca road up to the vicinity of the Temple of the Sun, and then continues in the direction of the Inca face. Using the digital elevation model we found the astronomical azimuth needed to confirm the astronomical orientation of the Temple. Thus we are finally able to demonstrate the possibility of a solar-solstice orientation in Ollantaytambo.
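The horizon-azimuth calculation underlying such a survey can be sketched from standard spherical astronomy. This sketch assumes a flat sea-level horizon and ignores refraction, which is precisely why the authors needed a digital elevation model for the real mountainous horizon; the latitude and declination values below are round illustrative figures:

```python
import math

def sun_azimuth_at_horizon(lat_deg, decl_deg):
    """Azimuth (degrees east of north) of sunrise over a flat horizon.

    From the standard relation for altitude 0 (no refraction):
        cos(A) = sin(decl) / cos(lat)
    The corresponding sunset azimuth is 360 - A.
    """
    cos_a = math.sin(math.radians(decl_deg)) / math.cos(math.radians(lat_deg))
    return math.degrees(math.acos(cos_a))

# Ollantaytambo lies near latitude 13.26 deg S. At the June (southern
# winter) solstice the solar declination is about +23.44 deg:
sunrise_az = sun_azimuth_at_horizon(-13.26, 23.44)
sunset_az = 360.0 - sunrise_az
```

For this latitude the flat-horizon sunrise azimuth comes out in the mid-60s of degrees; a raised mountain horizon shifts both events toward the meridian, which is what the DEM correction captures.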
Object-oriented crop mapping and monitoring using multi-temporal polarimetric RADARSAT-2 data
NASA Astrophysics Data System (ADS)
Jiao, Xianfeng; Kovacs, John M.; Shang, Jiali; McNairn, Heather; Walters, Dan; Ma, Baoluo; Geng, Xiaoyuan
2014-10-01
The aim of this paper is to assess the accuracy of an object-oriented classification of polarimetric Synthetic Aperture Radar (PolSAR) data to map and monitor crops using 19 RADARSAT-2 fine beam polarimetric (FQ) images of an agricultural area in north-eastern Ontario, Canada. Polarimetric images and field data were acquired during the 2011 and 2012 growing seasons. The classification and field data collection focused on the main crop types grown in the region: wheat, oat, soybean, canola and forage. The polarimetric parameters were extracted with PolSAR analysis using both the Cloude-Pottier and Freeman-Durden decompositions. The object-oriented classification, with a single date of PolSAR data, was able to classify all five crop types with an accuracy of 95% and a Kappa of 0.93, a 6% improvement over classification with linear polarizations only. However, the time of acquisition is crucial: the larger-biomass crops of canola and soybean were mapped most accurately, whereas the identification of oat and wheat was more variable. The multi-temporal data using the Cloude-Pottier decomposition parameters provided the best classification accuracy compared to the linear polarizations and the Freeman-Durden decomposition parameters. In general, the object-oriented classifications were able to accurately map crop types by reducing the noise inherent in the SAR data. Furthermore, using the crop classification maps we were able to monitor crop growth stage based on a trend analysis of the radar response. Based on field data from canola crops, there was a strong relationship between the phenological growth stage on the BBCH scale and the HV backscatter and entropy.
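One of the Cloude-Pottier parameters the study feeds into its classifier, the polarimetric entropy H, follows directly from the standard eigenvalue definition. The sketch below is an illustration of that textbook definition, not the authors' processing chain:

```python
import numpy as np

def cloude_pottier_entropy(T):
    """Polarimetric entropy H from a 3x3 Hermitian coherency matrix T.

    The eigenvalues are normalised to pseudo-probabilities p_i, and
    H = -sum(p_i * log3(p_i)). H near 0 indicates a single dominant
    scattering mechanism; H near 1 indicates depolarised (e.g. volume)
    scattering, typical of large-biomass crops such as canola.
    """
    eigvals = np.linalg.eigvalsh(T)
    eigvals = np.clip(eigvals, 1e-12, None)   # guard log(0)
    p = eigvals / eigvals.sum()
    return float(-(p * np.log(p) / np.log(3)).sum())

# A rank-1 coherency matrix (pure single scatterer) has entropy near 0:
h_low = cloude_pottier_entropy(np.outer([1, 0, 0], [1, 0, 0]).astype(complex))
# Equal eigenvalues (fully random scattering) give entropy 1:
h_high = cloude_pottier_entropy(np.eye(3, dtype=complex))
```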
Segmentation of prostate biopsy needles in transrectal ultrasound images
NASA Astrophysics Data System (ADS)
Krefting, Dagmar; Haupt, Barbara; Tolxdorff, Thomas; Kempkensteffen, Carsten; Miller, Kurt
2007-03-01
Prostate cancer is the most common cancer in men. Tissue extraction at different locations (biopsy) is the gold standard for diagnosing prostate cancer. These biopsies are commonly guided by transrectal ultrasound imaging (TRUS). Knowing the exact location of the extracted tissue within the gland is desirable for more specific diagnosis and better therapy planning. While the orientation and position of the needle within a clinical TRUS image are constrained, the apparent length and visibility of the needle vary strongly. Marker lines are present, and tissue inhomogeneities and deflection artefacts may appear. Simple segmentation methods based on intensity, gradients or edge detection fail. Therefore, a multivariate statistical classifier is implemented. The independent-feature model is built by supervised learning using a set of manually segmented needles. The feature space is spanned by common binary object features such as size and eccentricity, as well as imaging-system-dependent features like distance and orientation relative to the marker line. Object extraction is done by multi-step binarization of the region of interest. The ROI is determined automatically at the beginning of the segmentation, and marker lines are removed from the images. The segmentation itself is realized by scale-invariant classification using maximum-likelihood estimation with the Mahalanobis distance as discriminator. The technique presented here was successfully applied to 94% of 1835 TRUS images from 30 tissue extractions. It provides a robust method for biopsy needle localization in clinical prostate biopsy TRUS images.
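The core of such a classifier, learning a feature distribution from labelled examples and scoring candidates by Mahalanobis distance, can be sketched in a few lines. The feature values below are invented placeholders standing in for the paper's size, eccentricity and marker-line features, and the threshold step is only indicated, not tuned:

```python
import numpy as np

def fit_object_model(features):
    """Learn mean and covariance of object features (e.g. size,
    eccentricity, distance/orientation relative to the marker line)
    from manually segmented needle examples."""
    X = np.asarray(features, float)
    return X.mean(axis=0), np.cov(X, rowvar=False)

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of a candidate's feature vector from the
    learned needle model; a small distance marks a likely needle."""
    d = np.asarray(x, float) - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Hypothetical training vectors: (object size, eccentricity)
train = [[10.0, 0.90], [12.0, 0.85], [11.0, 0.95], [9.0, 0.90]]
mean, cov = fit_object_model(train)
near = mahalanobis([10.5, 0.90], mean, cov)   # resembles the model
far = mahalanobis([30.0, 0.20], mean, cov)    # clearly not a needle
```

A candidate would then be accepted as a needle when its distance falls below a threshold chosen on the training set.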
Object recognition and pose estimation of planar objects from range data
NASA Technical Reports Server (NTRS)
Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael
1994-01-01
The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. 
This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and noise.
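The plane extraction step described above, segmenting range data into planar surfaces and using their orientations for pose estimation, rests on fitting a plane to 3-D points. A standard least-squares sketch via SVD is shown below; this is an illustrative textbook method, not necessarily the EVAHR segmentation algorithm itself:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3-D range points.

    Returns (unit normal, centroid). The normal is the right singular
    vector associated with the smallest singular value of the centred
    point matrix, i.e. the direction of least variance.
    """
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)
    return vt[-1], centroid

# Points on the plane z = 0 recover a normal along the z axis:
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0.5, 0.3, 0)]
normal, centroid = fit_plane(pts)
```

Intersecting the fitted planes of adjacent surfaces then yields the edge and corner points used to anchor the object's spatial pose.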
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Pozzi, P.; Bezzubik, V. V.; Belashenkov, N. R.
2017-06-01
A superresolution image reconstruction method based on the structured illumination microscopy (SIM) principle with a reduced and simplified pattern set is presented. The described method needs only 2 sinusoidal patterns shifted by half a period for each spatial direction of reconstruction, instead of the minimum of 3 required by previously known methods. The method is based on estimating redundant frequency components in the acquired set of modulated images; the digital processing consists of linear operations. When applied to several spatial orientations, the image set can be further reduced to a single pattern for each spatial orientation, complemented by a single non-modulated image shared by all orientations. By utilizing this method for the case of two spatial orientations, the total input image set is reduced to 3 images, providing up to a 2-fold improvement in data acquisition time compared to the conventional 3-pattern SIM method. Using the simplified pattern design, the field of view can be doubled with the same number of spatial light modulator raster elements, resulting in a total 4-fold increase in the space-time product. The method requires precise knowledge of the optical transfer function (OTF). The key limitation is the thickness of the object layer that scatters or emits light, which must be sufficiently small relative to the lens depth of field. Numerical simulations and experimental results are presented. Experimental results were obtained on a SIM setup with a spatial light modulator based on a 1920x1080 digital micromirror device.
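The linear-algebra core of the two-pattern idea can be seen in a 1-D toy: with sinusoidal patterns shifted by half a period (a phase difference of pi), the sum of the two raw images cancels the modulation, leaving a conventional widefield image, while their difference isolates the modulated component that carries the frequency-shifted high-resolution content. This sketch ignores the OTF, noise, and the actual unmixing in frequency space; the object and pattern frequency are arbitrary choices:

```python
import numpy as np

# Toy 1-D object illuminated by a sinusoidal pattern and by the same
# pattern shifted by half a period (phase pi).
x = np.linspace(0.0, 1.0, 512, endpoint=False)
s = np.exp(-((x - 0.5) ** 2) / 0.01)      # toy object
k = 2 * np.pi * 20                         # illumination pattern frequency
d0 = s * (1 + np.cos(k * x))               # raw image, pattern phase 0
d1 = s * (1 + np.cos(k * x + np.pi))       # raw image, half-period shift

widefield = 0.5 * (d0 + d1)                # modulation cancels: equals s
modulated = 0.5 * (d0 - d1)                # equals s*cos(kx): carries the
                                           # frequency-shifted components
```

Deconvolving the modulated component against the known OTF is what recovers the out-of-band frequencies in the full method.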
Age effect in generating mental images of buildings but not common objects.
Piccardi, L; Nori, R; Palermo, L; Guariglia, C; Giusberti, F
2015-08-18
Imagining a familiar environment is different from imagining an environmental map, and clinical evidence has demonstrated the existence of double dissociations in brain-damaged patients due to the contents of mental images. Here, we assessed a large sample of young and old participants by considering their ability to generate different kinds of mental images, namely, buildings or common objects. As buildings are environmental stimuli that have an important role in human navigation, we expected that elderly participants would have greater difficulty in generating images of buildings than of common objects. We found that young and older participants differed in generating both buildings and common objects. For young participants there were no differences between buildings and common objects, but older participants found it easier to generate common objects than buildings. Buildings are a special type of visual stimulus because in urban environments they are commonly used as landmarks for navigational purposes. Considering that topographical orientation is one of the abilities most affected in normal and pathological aging, the present data throw some light on the impaired processes underlying human navigation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Super-resolved Mirau digital holography by structured illumination
NASA Astrophysics Data System (ADS)
Ganjkhani, Yasaman; Charsooghi, Mohammad A.; Akhlaghi, Ehsan A.; Moradi, Ali-Reza
2017-12-01
In this paper, we apply structured illumination toward super-resolved 3D imaging in a common-path digital holography arrangement. Digital holographic microscopy (DHM) provides non-invasive 3D images of transparent samples as well as 3D profiles of reflective surfaces. A compact and vibration-immune arrangement for DHM may be obtained through the use of a Mirau microscope objective. However, high-magnification Mirau objectives have a low working distance and are expensive. Low-magnification ones, on the other hand, suffer from low lateral resolution. Structured illumination has been widely used for resolution improvement of intensity images, but the technique can also be readily applied to DHM. We apply structured illumination to Mirau DHM by implementing successive sinusoidal gratings with different orientations onto a spatial light modulator (SLM) and forming its image on the specimen. Moreover, we show that, instead of different orientations of 1D gratings, alternative single 2D gratings, e.g. checkerboard or hexagonal patterns, can provide resolution enhancement in multiple directions. Our results show a 35% improvement in the resolution power of the DHM. The presented arrangement has the potential to serve as a table-top device for high resolution holographic microscopy.
Transplant Image Processing Technology under Windows into the Platform Based on MiniGUI
NASA Astrophysics Data System (ADS)
Gan, Lan; Zhang, Xu; Lv, Wenya; Yu, Jia
MFC provides a large number of image-processing API functions together with object-oriented class mechanisms, which gives image processing technology strong support under Windows. In embedded systems, however, hardware and software restrictions mean that the MFC environment of Windows is not available. This paper therefore draws on the image processing techniques of Windows and ports them to MiniGUI-based embedded systems. The results show that MiniGUI/Embedded graphical user interface applications for image processing achieve good results when used in an embedded image processing system.
Mitri, F.G.; Davis, B.J.; Greenleaf, J.F.; Fatemi, M.
2010-01-01
Background Permanent prostate brachytherapy (PPB) is a common treatment for early stage prostate cancer. While the modern approach using trans-rectal ultrasound guidance has demonstrated excellent outcomes, the efficacy of PPB depends on achieving complete radiation dose coverage of the prostate by obtaining a proper radiation source (seed) distribution. Currently, brachytherapy seed placement is guided by trans-rectal ultrasound imaging and fluoroscopy. A significant percentage of seeds are not detected by trans-rectal ultrasound because certain seed orientations are invisible, making accurate intra-operative feedback of radiation dosimetry very difficult, if not impossible. Therefore, intra-operative correction of suboptimal seed distributions cannot easily be done with current methods. Vibro-acoustography (VA) is an imaging modality that is capable of imaging solids at any orientation, and the resulting images are speckle free. Objective and methods The purpose of this study is to compare the capabilities of VA and pulse-echo ultrasound in imaging PPB seeds at various angles and to show the sensitivity of detection to seed orientation. In the VA experiment, two intersecting ultrasound beams driven at f1 = 3.00 MHz and f2 = 3.020 MHz, respectively, were focused on the seeds attached to a latex membrane while the amplitude of the acoustic emission produced at the difference frequency of 20 kHz was detected by a low frequency hydrophone. Results Finite element simulations and results of experiments conducted under well-controlled conditions in a water tank on a series of seeds indicate that the seeds can be detected at any orientation with VA, whereas pulse-echo ultrasound is very sensitive to seed orientation. Conclusion It is concluded that vibro-acoustography is superior to pulse-echo ultrasound for detection of PPB seeds. PMID:18538365
A Java application for tissue section image analysis.
Kamalov, R; Guillaud, M; Haskins, D; Harrison, A; Kemp, R; Chiu, D; Follen, M; MacAulay, C
2005-02-01
The medical industry has taken advantage of Java and Java technologies over the past few years, in large part due to the language's platform-independence and object-oriented structure. As such, Java provides powerful and effective tools for developing tissue section analysis software. The background and execution of this development are discussed in this publication. Object-oriented structure allows for the creation of "Slide", "Unit", and "Cell" objects to simulate the corresponding real-world objects. Different functions may then be created to perform various tasks on these objects, thus facilitating the development of the software package as a whole. At the current time, substantial parts of the initially planned functionality have been implemented. Getafics 1.0 is fully operational and currently supports a variety of research projects; however, there are certain features of the software that currently introduce unnecessary complexity and inefficiency. In the future, we hope to include features that obviate these problems.
Vectorial point spread function and optical transfer function in oblique plane imaging.
Kim, Jeongmin; Li, Tongcang; Wang, Yuan; Zhang, Xiang
2014-05-05
Oblique plane imaging, using remote focusing with a tilted mirror, enables direct two-dimensional (2D) imaging of any inclined plane of interest in three-dimensional (3D) specimens. It can image the real-time dynamics of a living sample that changes rapidly or evolves its structure along arbitrary orientations. It also allows direct observation of any tilted target plane in an object whose orientational information is inaccessible during sample preparation. In this work, we study the optical resolution of this innovative wide-field imaging method. Using the vectorial diffraction theory, we formulate the vectorial point spread function (PSF) of direct oblique plane imaging. The anisotropic lateral resolving power caused by light clipping from the tilted mirror is theoretically analyzed for all oblique angles. We show that the 2D PSF in oblique plane imaging is conceptually different from the inclined 2D slice of the 3D PSF in conventional lateral imaging. The vectorial optical transfer function (OTF) of oblique plane imaging is also calculated by the fast Fourier transform (FFT) method to study the effects of oblique angles on frequency responses.
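The PSF-to-OTF step the paper performs via FFT has a simple scalar counterpart: the OTF is the normalised Fourier transform of the intensity PSF. The sketch below is a minimal scalar version, not the paper's full vectorial calculation, and the delta-function PSF is an idealised test case:

```python
import numpy as np

def otf_from_psf(psf):
    """Optical transfer function as the normalised 2-D Fourier transform
    of a centred intensity point spread function.

    Anisotropic clipping of the pupil, as caused by the tilted mirror in
    oblique plane imaging, shows up directly as an anisotropic support
    region of the resulting OTF.
    """
    otf = np.fft.fft2(np.fft.ifftshift(psf))   # move PSF centre to origin
    return otf / otf.flat[0]                   # normalise so OTF(0) = 1

# An ideal delta-function PSF transfers all frequencies equally,
# yielding a flat OTF:
psf = np.zeros((64, 64))
psf[32, 32] = 1.0
flat = otf_from_psf(psf)
```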
Voting based object boundary reconstruction
NASA Astrophysics Data System (ADS)
Tian, Qi; Zhang, Like; Ma, Jingsheng
2005-07-01
A voting-based object boundary reconstruction approach is proposed in this paper. Morphological techniques have been adopted in many video object extraction applications to reconstruct missing pixels; however, when the missing areas become large, morphological processing no longer yields good results. Recently, tensor voting has attracted attention and can be used for boundary estimation on curves or irregular trajectories, but the complexity of saliency tensor creation limits its applications in real-time systems. An alternative approach based on tensor voting is introduced in this paper. Rather than creating saliency tensors, we use a "2-pass" method for orientation estimation. In the first pass, a Sobel detector is applied to a coarse boundary image to obtain the gradient map. In the second pass, each pixel casts decreasing weights based on its gradient information, and the direction with the maximum weight sum is selected as the correct orientation of the pixel. After the orientation map is obtained, pixels link edges or intersections along their directions. The approach is applied to various video surveillance clips under different conditions, and the experimental results demonstrate a significant improvement in the accuracy of the final extracted objects.
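The first pass of the "2-pass" scheme can be sketched with a minimal Sobel detector: it yields, per pixel, a gradient magnitude (the vote weight) and a gradient direction. This is only the gradient-map stage; the paper's second pass, accumulating decreasing weighted votes along candidate directions, is described but not implemented here:

```python
import numpy as np

def sobel(image, axis):
    """Minimal Sobel derivative via shifted differences (edge padding)."""
    a = np.pad(np.asarray(image, float), 1, mode="edge")
    if axis == 1:                       # horizontal derivative (columns)
        d = a[:, 2:] - a[:, :-2]
        return d[:-2] + 2 * d[1:-1] + d[2:]
    d = a[2:] - a[:-2]                  # vertical derivative (rows)
    return d[:, :-2] + 2 * d[:, 1:-1] + d[:, 2:]

def orientation_map(image):
    """First pass of the 2-pass scheme: per-pixel gradient magnitude
    (used as vote weight) and gradient direction in radians."""
    gx, gy = sobel(image, 1), sobel(image, 0)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

# A vertical step edge produces a purely horizontal gradient:
img = np.zeros((10, 10))
img[:, 5:] = 1.0
mag, ang = orientation_map(img)
```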
Three-quarter view preference for three-dimensional objects in 8-month-old infants.
Yamashita, Wakayo; Niimi, Ryosuke; Kanazawa, So; Yamaguchi, Masami K; Yokosawa, Kazuhiko
2014-04-04
This study examined infants' visual perception of three-dimensional common objects. It has been reported that human adults perceive object images in a view-dependent manner: three-quarter views are often preferred to other views, and the sensitivity to object orientation is lower for three-quarter views than for other views. We tested whether such characteristics were observed in 6- to 8-month-old infants by measuring their preferential looking behavior. In Experiment 1 we examined 190- to 240-day-olds' sensitivity to orientation change and in Experiment 2 we examined these infants' preferential looking for the three-quarter view. The 240-day-old infants showed a pattern of results similar to adults for some objects, while the 190-day-old infants did not. The 240-day-old infants' perception of object view is (partly) similar to that of adults. These results suggest that human visual perception of three-dimensional objects develops at 6 to 8 months of age.
Rotation-invariant features for multi-oriented text detection in natural images.
Yao, Cong; Zhang, Xin; Bai, Xiang; Liu, Wenyu; Ma, Yi; Tu, Zhuowen
2013-01-01
Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with the competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts in varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on variant texts in complex natural scenes.
Markant, Julie; Worden, Michael S; Amso, Dima
2015-04-01
Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engaged suppression of a previously attended location would boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, 1980; Posner, Rafal, & Choate, 1985) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.
Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.
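The perturb-and-retest idea can be illustrated with a minimal sketch. This is not the MAYA tool itself: the CNN-derived error model is replaced here by simple additive noise and brightness shifts, and `detector` stands in for any detection system under test.

```python
import numpy as np

def perturb(image, rng, noise_sigma=8.0, brightness=10.0):
    """Generate a perturbed copy of `image`: additive Gaussian noise
    plus a global brightness shift, clipped to the valid 8-bit range."""
    noisy = image.astype(np.float64)
    noisy += rng.normal(0.0, noise_sigma, size=image.shape)
    noisy += rng.uniform(-brightness, brightness)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def consistency_test(detector, image, n_trials=20, seed=0):
    """Run `detector` on perturbations of `image` and report the fraction
    of trials whose verdict matches the verdict on the clean image."""
    rng = np.random.default_rng(seed)
    reference = detector(image)
    agree = sum(detector(perturb(image, rng)) == reference
                for _ in range(n_trials))
    return agree / n_trials
```

A robust detector should score near 1.0; a large drop under small perturbations flags the kind of fragility such testing tools are designed to expose.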
Object-oriented feature-tracking algorithms for SAR images of the marginal ice zone
NASA Technical Reports Server (NTRS)
Daida, Jason; Samadani, Ramin; Vesecky, John F.
1990-01-01
An unsupervised method that chooses and applies the most appropriate tracking algorithm from among different sea-ice tracking algorithms is reported. In contrast to current unsupervised methods, this method chooses and applies an algorithm by partially examining a sequential image pair to draw inferences about what was examined. Based on these inferences the reported method subsequently chooses which algorithm to apply to specific areas of the image pair where that algorithm should work best.
Dark-field hyperspectral X-ray imaging
Egan, Christopher K.; Jacques, Simon D. M.; Connolley, Thomas; Wilson, Matthew D.; Veale, Matthew C.; Seller, Paul; Cernik, Robert J.
2014-01-01
In recent times, there has been a drive to develop non-destructive X-ray imaging techniques that provide chemical or physical insight. To date, these methods have generally been limited: either requiring raster scanning of pencil beams, using narrow-bandwidth radiation, and/or limited to small samples. We have developed a novel full-field radiographic imaging technique that enables the entire physico-chemical state of an object to be imaged in a single snapshot. The method is sensitive to emitted and scattered radiation, using a spectral imaging detector and polychromatic hard X-radiation, making it particularly useful for studying large dense samples for materials science and engineering applications. The method and its extension to three-dimensional imaging are validated with a series of test objects and demonstrated by directly imaging the crystallographic preferred orientation and formed precipitates across an aluminium alloy friction stir weld section. PMID:24808753
System for interferometric distortion measurements that define an optical path
Bokor, Jeffrey; Naulleau, Patrick
2003-05-06
An improved phase-shifting point diffraction interferometer can measure both distortion and wavefront aberration. In the preferred embodiment, the interferometer employs an object-plane pinhole array, comprising a plurality of object pinholes located between the test optic and the source of electromagnetic radiation, and an image-plane mask array positioned in the image plane of the test optic. The image-plane mask array comprises a plurality of test windows and corresponding reference pinholes, wherein the positions of the pinholes in the object-plane pinhole array register with those of the test windows in the image-plane mask array. Electromagnetic radiation is directed into a first pinhole of the object-plane pinhole array, thereby creating a first corresponding test-beam image on the image-plane mask array. Where distortion is relatively small, it can be measured directly by interferometry: the separation distance between, and the orientation of, the test beam and the reference-beam pinhole are measured, and this process is repeated for at least one other pinhole of the object-plane pinhole array. Where the distortion is relatively large, it can be measured by using interferometry to direct the motion of a stage supporting the image-plane mask array, and then using the final stage motion as a measure of the distortion.
A semi-automated image analysis procedure for in situ plankton imaging systems.
Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M
2015-01-01
Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. Compared to images from laboratory-controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike in images acquired under laboratory-controlled conditions or in clear waters, here the target objects are a small minority (< 5%), so the task is to classify the target objects while removing the dominant non-target objects (> 95%). We customized a two-level hierarchical classification procedure using support vector machines. First, histograms of oriented gradients feature descriptors were constructed for the segmented objects, and all target and non-target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects.
After classification, an expert or non-expert manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts, with >80% accuracy for all three groups.
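The histogram-of-oriented-gradients features used above can be sketched in simplified form: a single global orientation histogram rather than the cell/block layout of a full HOG implementation. This is only an illustration of the descriptor idea; the paper's actual descriptor parameters and SVM cascade are not reproduced here.

```python
import numpy as np

def hog_descriptor(patch, n_bins=9):
    """Coarse histogram of oriented gradients: unsigned gradient
    orientations, weighted by gradient magnitude, pooled over the
    whole patch and normalized to sum to one."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

In the described pipeline, such descriptors would feed a first-level SVM that assigns each segmented object to the arrow-like, copepod-like, or gelatinous group, followed by group-specific SVMs that reject non-target objects.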
Vision-based overlay of a virtual object into real scene for designing room interior
NASA Astrophysics Data System (ADS)
Harasaki, Shunsuke; Saito, Hideo
2001-10-01
In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real-world scene. The interior simulator is developed as an example AR application of the proposed method. Using it, users can visually simulate the placement of virtual furniture and articles in a living room, viewing from many different locations and orientations in real time, so that they can easily design the room interior without placing real furniture. In our system, two base images of the real-world space are captured from two different views to define a projective coordinate frame for the 3D space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real-world space is captured by a hand-held camera while non-metric feature points are tracked for overlaying the virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living-room scene at nearly video rate (20 frames per second).
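The projective registration underlying this kind of overlay can be sketched as a direct linear transform (DLT) homography estimate between matched feature points in two views. This is a generic illustration of plane-to-plane registration, not the authors' exact two-base-image method.

```python
import numpy as np

def find_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography mapping
    src[i] -> dst[i] from >= 4 point correspondences (no 3 collinear)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to an array of 2-D points."""
    pts = np.asarray(pts, dtype=np.float64)
    homo = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return homo[:, :2] / homo[:, 2:3]
```

Given such a mapping between tracked feature points in consecutive frames, the corners of the virtual object can be re-projected into each new frame without any magnetic tracker.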
Three-dimensional object recognition based on planar images
NASA Astrophysics Data System (ADS)
Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.
1993-01-01
This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object, from among a group of known regular convex polyhedral objects, that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the models in the database. Besides its identification ability, the system can also provide important position and orientation information about the recognized object. The system was implemented on an IBM-PC AT machine executing at 8 MHz without an 80287 maths co-processor. In an overall performance evaluation based on 600 recognition cycles, the system demonstrated an accuracy above 80% with recognition times well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions, which must be strictly controlled, as in any industrial robotic vision system.
Quickly updatable hologram images with high performance photorefractive polymer composites
NASA Astrophysics Data System (ADS)
Tsutsumi, Naoto; Kinashi, Kenji; Nonomura, Asato; Sakai, Wataru
2012-02-01
We present here quickly updatable hologram images using a high performance photorefractive (PR) polymer composite based on poly(N-vinyl carbazole) (PVCz), one of the pioneering photoconductive polymers. PVCz/7-DCST/CzEPA/TNF (44/35/20/1 by wt) gives a high diffraction efficiency of 68% at E = 45 V/μm with fast response speed. The response speed of optical diffraction is the key parameter for real-time 3D holographic display. The key to obtaining quickly updatable hologram images is to keep the glass transition temperature low enough to enhance chromophore orientation. An object image of a reflective coin surface, recorded with a reference beam at 532 nm (green) in the PR polymer composite, is simultaneously reconstructed using a red probe beam at 642 nm. Instead of the coin, an object image produced by a computer and displayed on a spatial light modulator (SLM) was also used as the hologram object. The object beam reflected from the SLM interfered with the reference beam in the PR polymer composite to record a hologram, which was simultaneously reconstructed by the red probe beam. A movie produced on a computer was recorded as a real-time hologram in the PR polymer composite and simultaneously, clearly reconstructed at video rate.
Gomez-Cardona, Daniel; Cruz-Bastida, Juan Pablo; Li, Ke; Budde, Adam; Hsieh, Jiang; Chen, Guang-Hong
2016-08-01
Noise characteristics of clinical multidetector CT (MDCT) systems can be quantified by the noise power spectrum (NPS). Although the NPS of CT has been extensively studied in the past few decades, the joint impact of the bowtie filter and object position on the NPS has not been systematically investigated. This work studies the interplay of these two factors on the two dimensional (2D) local NPS of a clinical CT system that uses the filtered backprojection algorithm for image reconstruction. A generalized NPS model was developed to account for the impact of the bowtie filter and image object location in the scan field-of-view (SFOV). For a given bowtie filter, image object, and its location in the SFOV, the shape and rotational symmetries of the 2D local NPS were directly computed from the NPS model without going through the image reconstruction process. The obtained NPS was then compared with the measured NPSs from the reconstructed noise-only CT images in both numerical phantom simulation studies and experimental phantom studies using a clinical MDCT scanner. The shape and the associated symmetry of the 2D NPS were classified by borrowing the well-known atomic spectral symbols s, p, and d, which correspond to circular, dumbbell, and cloverleaf symmetries, respectively, of the wave function of electrons in an atom. Finally, simulated bar patterns were embedded into experimentally acquired noise backgrounds to demonstrate the impact of different NPS symmetries on the visual perception of the object. (1) For a central region in a centered cylindrical object, an s-wave symmetry was always present in the NPS, no matter whether the bowtie filter was present or not. In contrast, for a peripheral region in a centered object, the symmetry of its NPS was highly dependent on the bowtie filter, and both p-wave symmetry and d-wave symmetry were observed in the NPS. 
(2) For a centered region-of-interest (ROI) in an off-centered object, the symmetry of its NPS was found to be different from that of a peripheral ROI in the centered object, even when the physical positions of the two ROIs relative to the isocenter were the same. (3) The potential clinical impact of the highly anisotropic NPS, caused by the interplay of the bowtie filter and the position of the image object, was highlighted in images of specific bar patterns oriented at different angles. The visual perception of the bar patterns was found to be strongly dependent on their orientation. The NPS of CT depends strongly on the bowtie filter and object position. Even if the location of the ROI with respect to the isocenter is fixed, there can be different symmetries in the NPS, depending on the object position and the size of the bowtie filter. For an isolated off-centered object, the NPS of its CT images cannot be represented by the NPS measured from a centered object.
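A local NPS of the kind measured here is conventionally estimated from an ensemble of noise-only ROIs as the ensemble-averaged squared DFT magnitude. A minimal sketch, assuming a uniform pixel size and no detrending beyond mean subtraction:

```python
import numpy as np

def local_nps(noise_rois, pixel_size=1.0):
    """Estimate the 2-D local noise power spectrum from an ensemble of
    noise-only ROIs (shape: n_rois x N x N): mean-subtract each ROI,
    take its 2-D DFT, average the squared magnitudes, and normalize
    by area so the spectrum integrates to the pixel variance."""
    rois = np.asarray(noise_rois, dtype=np.float64)
    n, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)
    dft = np.fft.fft2(rois)
    nps = (np.abs(dft) ** 2).mean(axis=0) * pixel_size**2 / (nx * ny)
    return np.fft.fftshift(nps)  # zero frequency at the center
```

For stationary white noise this estimate is flat (the s-wave case); the p- and d-wave symmetries reported above appear when the bowtie filter and object position make the projection noise anisotropic.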
[Several mechanisms of visual gnosis disorders in local brain lesions].
Meerson, Ia A
1981-01-01
The object of the studies was the recognition of visual images by patients with local cerebral lesions under conditions of incomplete sets of image features, disjunction of features, distortion of their spatial arrangement, and unusual spatial orientation of the image as a whole. It was found that eliminating even one essential feature sharply hampered recognition of the image both for healthy individuals (controls) and for patients with extra-occipital lesions, whereas eliminating several nonessential features only slowed the process. In contrast, for patients with occipital lesions, the difficulty of recognizing incomplete images was directly proportional to the number of eliminated features irrespective of their significance; i.e., these patients were unable to evaluate the hierarchy of the features. The recognition process in these patients proceeded by scanning individual features, with their accumulation and summation. Recognition of fragmented, spatially distorted, and unusually oriented images was found to be selectively affected in patients with parietal-lobe lesions; patients with occipital lesions recognized such images practically as well as ordinary ones.
The Land-Use and Land-Cover Change Analysis in Beijing Huairou in Last Ten Years
NASA Astrophysics Data System (ADS)
Zhao, Q.; Liu, G.; Tu, J.; Wang, Z.
2018-04-01
Using eCognition software, a sample-based object-oriented classification method was applied to remote sensing images of the Huairou district of Beijing acquired over the last ten years. Based on the image processing results, the land-use types in the Huairou district over the past ten years were analyzed, the changes in land-use types were obtained, and the reasons for these changes were examined.
VA's Integrated Imaging System on three platforms.
Dayhoff, R E; Maloney, D L; Majurski, W J
1992-01-01
The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.
Automatic image database generation from CAD for 3D object recognition
NASA Astrophysics Data System (ADS)
Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.
1993-06-01
The development and evaluation of multiple-view 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of providing physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file sharing system (NFS), the images can be stored directly in a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, advantages and limitations of using CAD data, and conclusions from using such a scheme are also presented.
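Generating views at desired orientations on the unit Gaussian sphere can be done with any quasi-uniform sampling; the Fibonacci spiral lattice below is one common choice, used here purely as an illustration (the paper does not specify its sampling scheme).

```python
import numpy as np

def fibonacci_viewpoints(n):
    """Distribute `n` camera view directions quasi-uniformly on the unit
    (Gaussian) sphere using the Fibonacci spiral lattice: longitudes
    advance by the golden angle while z is sampled uniformly."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i   # golden-angle longitude
    z = 1.0 - 2.0 * (i + 0.5) / n            # uniform in z
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
```

Each returned unit vector would serve as a camera viewing direction for the CAD renderer, and the resulting images would populate the model database.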
Neural-net-based image matching
NASA Astrophysics Data System (ADS)
Jerebko, Anna K.; Barabanov, Nikita E.; Luciv, Vadim R.; Allinson, Nigel M.
2000-04-01
The paper describes a neural-based method for matching spatially distorted image sets. The matching of partially overlapping images is important in many applications: integrating information from images formed in different spectral ranges, detecting changes in a scene, and identifying objects of differing orientations and sizes. Our approach consists of extracting contour features from both images, describing the contour curves as sets of line segments, comparing these sets, determining the corresponding curves and their common reference points, and calculating the image-to-image coordinate transformation parameters on the basis of the most successful variant of the derived curve relationships. The main steps are performed by custom neural networks. The algorithms described in this paper have been successfully tested on a large set of images of the same terrain taken in different spectral ranges, at different seasons and rotated by various angles. In general, this experimental verification indicates that the proposed method for image fusion allows the robust detection of similar objects in noisy, distorted scenes where traditional approaches often fail.
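Once corresponding curves and reference points are found, the final transformation-parameter step can be done in closed form. The sketch below fits a 2-D similarity transform (rotation, scale, translation) by least squares from matched points; it only illustrates that last step, not the paper's custom neural networks.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform with dst ~ s * R @ src + t,
    estimated from matched point pairs via the complex-number form:
    each centered point is x + iy, and s*exp(i*theta) is the ratio
    <dst, src> / <src, src> of complex inner products."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    zs = (src - mu_s)[:, 0] + 1j * (src - mu_s)[:, 1]
    zd = (dst - mu_d)[:, 0] + 1j * (dst - mu_d)[:, 1]
    a = np.vdot(zs, zd) / np.vdot(zs, zs)   # = s * exp(i*theta)
    s, theta = np.abs(a), np.angle(a)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = mu_d - s * R @ mu_s
    return s, R, t
```

In a matching pipeline like the one described, this fit would be run for each candidate variant of curve correspondences, keeping the variant with the smallest residual.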
Macaluso, Emiliano; Ogawa, Akitoshi
2018-05-01
Functional imaging studies have associated dorsal and ventral fronto-parietal regions with the control of visuo-spatial attention. Previous studies demonstrated that the activity of both the dorsal and the ventral attention systems can be modulated by many different factors, related both to the stimuli and the task. However, the vast majority of this work utilized stereotyped paradigms with simple and repeated stimuli. This is at odds with real-life situations, which instead involve complex combinations of different types of co-occurring signals, thus raising the question of the ecological significance of the previous findings. Here we investigated how the brain responds to task-related and stimulus-related signals using an innovative approach that involved active exploration of a virtual environment. This enabled us to study visuo-spatial orienting in conditions entailing a dynamic and coherent flow of visual signals, to some extent analogous to real-life situations. The environment comprised colored/textured spheres and cubes, which allowed us to implement a standard feature-conjunction search task (task-related signals), and included one physically salient object that served to track the processing of stimulus-related signals. The imaging analyses showed that the posterior parietal cortex (PPC) activated when the participants' gaze was directed towards the salient objects. By contrast, the right inferior parietal cortex was associated with the processing of the target objects and of distractors that shared the target color and shape, consistent with goal-directed template-matching operations. The study highlights the possibility of combining measures of gaze orienting and functional imaging to investigate the processing of different types of signals during active behavior in complex environments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Image reconstruction of x-ray tomography by using image J platform
NASA Astrophysics Data System (ADS)
Zain, R. M.; Razali, A. M.; Salleh, K. A. M.; Yahya, R.
2017-01-01
A tomogram is the technical term for a CT image. It is also called a slice because it corresponds to what the scanned object would look like if it were sliced open along a plane; a CT slice corresponds to a certain thickness of the object being scanned. So, while a typical digital image is composed of pixels, a CT slice image is composed of voxels (volume elements). In x-ray tomography, as in x-ray radiography, the quantity being imaged is the distribution of the attenuation coefficient μ(x) within the object of interest. The difference lies only in how the image is produced: a radiographic image is obtained directly after x-ray exposure, whereas a tomographic image is produced by combining radiographic projections acquired at every projection angle. A number of image reconstruction methods that convert x-ray attenuation data into a tomographic image have been produced by researchers. In this work, the Ramp filter in filtered back projection has been applied: the linear data acquired at each angular orientation are convolved with a specially designed filter and then back projected across a pixel field at the same angle. This paper describes the steps for using the ImageJ software to produce an image reconstruction of x-ray tomography.
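The ramp-filter-and-back-project procedure described above can be sketched in a few lines. This uses nearest-neighbour interpolation and an unwindowed ramp for illustration only; practical implementations (including ImageJ plugins) use finer interpolation and windowed filters.

```python
import numpy as np

def ramp_filter(sinogram):
    """Filter each projection (row) of the sinogram with the ramp
    filter |f| in the Fourier domain (the 'filtered' step of FBP)."""
    n = sinogram.shape[1]
    freqs = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * freqs, axis=1))

def back_project(filtered, angles_deg, size):
    """Smear each filtered projection back across the pixel grid at its
    acquisition angle and sum (the 'back projection' step of FBP)."""
    recon = np.zeros((size, size))
    mid = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size] - mid
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        # detector coordinate of each pixel for this viewing angle
        tpos = xs * np.cos(ang) + ys * np.sin(ang) + mid
        idx = np.clip(np.round(tpos).astype(int), 0, size - 1)
        recon += proj[idx]
    return recon * np.pi / (2 * len(angles_deg))
```

A delta-like sinogram (a single bright detector bin at all angles) reconstructs to a point at the isocenter, which is a convenient sanity check for the angle and coordinate conventions.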
Mapping molecular orientational distributions for biological sample in 3D (Conference Presentation)
NASA Astrophysics Data System (ADS)
HE, Wei; Ferrand, Patrick; Richter, Benjamin; Bastmeyer, Martin; Brasselet, Sophie
2016-04-01
Measuring molecular orientation properties is very appealing for scientists in molecular and cell biology, as well as biomedical research. Orientational organization at the molecular scale is indeed an important building block of cell and tissue morphology, mechanics, function and pathology. Recent work has shown that polarized fluorescence imaging, based on excitation polarization tuning in the sample plane, is able to probe molecular orientational order in biological samples; however, this applies only to information in 2D, projected onto the sample plane. To surpass this limitation, we extended this approach to excitation polarization tuning in 3D. The principle is based on the decomposition of any arbitrary 3D linear excitation into a polarization along the longitudinal z-axis and a polarization in the transverse xy sample plane. We designed an interferometer with one arm generating radially polarized light (thus producing longitudinal polarization under high numerical aperture focusing), and the other arm controlling a linear polarization in the transverse plane. The amplitude ratio between the two arms can be varied so as to obtain any linearly polarized excitation in 3D at the focus of a high-NA objective. This technique has been characterized by polarimetry imaging at the back focal plane of the focusing objective, and modeled theoretically. 3D polarized fluorescence microscopy is demonstrated on actin stress fibers in non-flat cells suspended on synthetic polymer structures forming supporting pillars, for which heterogeneous actin orientational order could be identified. This technique shows great potential for structural investigations in 3D biological systems, such as cell spheroids and tissues.
Feature Extraction for Pose Estimation. A Comparison Between Synthetic and Real IR Imagery
1991-12-01
[List-of-figures fragment; only the figure captions are recoverable:] Determining the orientation of the sensor relative to the target. Effects of changing sensor and target parameters; the reference object is a T-62 tank facing the viewer (sensor/target parameters set equal to zero); note that changing the target parameters produces anomalous results, and for these images the field of view (FOV) was not changed. Image anomalies from changing the target.
Object oriented classification of high resolution data for inventory of horticultural crops
NASA Astrophysics Data System (ADS)
Hebbar, R.; Ravishankar, H. M.; Trivedi, S.; Subramoniam, S. R.; Uday, R.; Dadhwal, V. K.
2014-11-01
High resolution satellite images are associated with large variance, and thus per-pixel classifiers often result in poor accuracy, especially in delineation of horticultural crops. In this context, object-oriented techniques are powerful and promising methods for classification. In the present study, a semi-automatic object-oriented feature extraction model has been used for delineation of horticultural fruit and plantation crops using Erdas Objective Imagine. Multi-resolution data from Resourcesat LISS-IV and Cartosat-1 have been used as source data in the feature extraction model. Spectral and textural information along with NDVI were used as inputs for generation of Spectral Feature Probability (SFP) layers using sample training pixels. The SFP layers were then converted into raster objects using threshold and clump functions, resulting in a pixel probability layer. A set of raster and vector operators was employed in the subsequent steps for generating a thematic layer in vector format. This semi-automatic feature extraction model was employed for classification of major fruit and plantation crops, viz., mango, banana, citrus, coffee and coconut, grown under different agro-climatic conditions. In general, a classification accuracy of about 75-80 per cent was achieved for these crops using object-based classification alone, and this was further improved using minimal visual editing of misclassified areas. A comparison of on-screen visual interpretation with the object-oriented approach showed good agreement. It was observed that old and mature plantations were classified more accurately, while young and recently planted ones (3 years or less) showed poor classification accuracy due to mixed spectral signatures, wider spacing and poor stands. The results indicated the potential of the object-oriented approach for classification of high resolution data for delineation of horticultural fruit and plantation crops.
The present methodology is applicable at local levels and future development is focused on up-scaling the methodology for generation of fruit and plantation crop maps at regional and national level which is important for creation of database for overall horticultural crop development.
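The abstract uses NDVI as one input to the Spectral Feature Probability layers. As a minimal sketch (the `ndvi` helper and band values are illustrative, not from the study), NDVI is computed from near-infrared and red reflectance:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, clipped to [-1, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return np.clip((nir - red) / (nir + red + eps), -1.0, 1.0)

# Dense plantation canopy reflects strongly in NIR and weakly in red,
# so it yields high NDVI; bare or sparsely vegetated ground stays low.
canopy = ndvi([0.6], [0.1])   # ~0.71
bare   = ndvi([0.3], [0.25])  # ~0.09
```

Young, widely spaced plantings mix soil and canopy within a pixel, pulling NDVI toward the bare-ground value, which is consistent with the poorer accuracy the abstract reports for recently planted crops.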
Chang, Chia-Yuan; Lin, Cheng-Han; Lin, Chun-Yu; Sie, Yong-Da; Hu, Yvonne Yuling; Tsai, Sheng-Feng; Chen, Shean-Jen
2018-01-01
A previously developed temporal focusing-based multiphoton excitation microscope (TFMPEM) adopts a digital micromirror device (DMD) simultaneously as a blazed grating for spatial light dispersion and for patterned illumination. Herein, the TFMPEM has been extended to implement spatially modulated illumination at a structured frequency and orientation to increase the beam coverage at the back-focal aperture of the objective lens. The axial excitation confinement (AEC) of the TFMPEM can be condensed from 3.0 μm to 1.5 μm, a 50% improvement. Using the TFMPEM with the HiLo technique, with two structured illuminations at the same spatial frequency but different orientations, biotissue images reconstructed under the condensed-AEC structured illumination are clearly superior in contrast and show better scattering suppression. Picture: TPEF images of the eosin-stained mouse cerebellar cortex by conventional TFMPEM (left), and the TFMPEM with the HiLo technique at 1.09 μm⁻¹ spatially modulated illumination in 90° (center) and 0° (right) orientations. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Problems and Limitations of Satellite Image Orientation for Determination of Height Models
NASA Astrophysics Data System (ADS)
Jacobsen, K.
2017-05-01
The usual satellite image orientation is based on bias-corrected rational polynomial coefficients (RPC). The RPC describe the direct sensor orientation of the satellite images. The locations of the projection centres pose no problems today, but an accuracy limit is imposed by the attitudes. Very high resolution satellites today are very agile, able to shift the pointed area by over 200 km within 10 to 11 seconds. The corresponding fast attitude acceleration of the satellite may cause jitter which cannot be expressed by the third-order RPC, even if it is recorded by the gyros. Only a correction of the image geometry would help, but usually this is not done. The first indication of jitter problems is systematic errors of the y-parallaxes (py) in the intersection of corresponding points during the computation of ground coordinates. These y-parallaxes have a limited influence on the ground coordinates, but similar problems can be expected for the x-parallaxes, which directly determine the object height. Systematic y-parallaxes are shown for Ziyuan-3 (ZY3), WorldView-2 (WV2), Pleiades, Cartosat-1, IKONOS and GeoEye; some of them show clear jitter effects. In addition, linear trends in py can be seen. Linear trends in py and tilts of the computed height models may be caused by the limited accuracy of the attitude registration, but also by bias correction with affinity transformation. The bias correction is based on ground control points (GCPs). The accuracy of the GCPs usually imposes no limitation, but the identification of the GCPs in the images may be difficult. Two-dimensional bias-corrected RPC orientation by affinity transformation may cause tilts of the generated height models, but due to large affine image deformations some satellites, such as Cartosat-1, have to be handled with bias correction by affinity transformation.
Instead of a 2-dimensional RPC orientation, a 3-dimensional orientation is also possible, which respects the object height more than the 2-dimensional orientation does. The 3-dimensional orientation showed advantages for orientation based on a limited number of GCPs, but in the case of a poor GCP distribution it may also cause negative effects. For some of the satellites used, bias correction by affinity transformation showed advantages, but for others bias correction by shift led to better levelling of the generated height models, even if the root mean square (RMS) differences at the GCPs were larger than for bias correction by affinity transformation. The generated height models can be analyzed and corrected with reference height models. For the data sets used, accurate reference height models are available, but an analysis and correction with the freely available SRTM digital surface model (DSM) or ALOS World 3D (AW3D30) is also possible and leads to similar results. The comparison of the generated height models with the reference DSM shows some height undulations, but the major accuracy influence comes from tilts of the height models. Some height model undulations reach up to 50% of the ground sampling distance (GSD); this is not negligible, even though it is not very apparent in the standard deviations of the height. In any case, an improvement of the generated height models is possible with reference height models. If such corrections are applied, they compensate possible negative effects of the type of bias correction or of 2-dimensional versus 3-dimensional handling.
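For context, the RPC orientation discussed above can be sketched generically: each normalized image coordinate is a ratio of two third-order polynomials in normalized ground coordinates, and bias correction by affinity transformation is a least-squares affine fit at the GCPs. This is an illustrative sketch of the standard model, not Jacobsen's implementation:

```python
import numpy as np

def cubic_terms(P, L, H):
    """The 20 monomials of the standard third-order RPC polynomial
    in normalized latitude P, longitude L and height H."""
    return np.array([1.0, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                     P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
                     P*H*H, L*L*H, P*P*H, H**3])

def rpc_project(num, den, P, L, H):
    """One normalized image coordinate as a ratio of two cubic polynomials."""
    t = cubic_terms(P, L, H)
    return (num @ t) / (den @ t)

def fit_affine_bias(projected, observed):
    """Bias correction by affinity transformation: least-squares affine
    mapping from RPC-projected to observed GCP image coordinates."""
    A = np.column_stack([projected, np.ones(len(projected))])
    coef, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return coef  # 3x2 matrix; apply as [x, y, 1] @ coef
```

A bias correction by shift alone would fix only the constant row of `coef`, which is the trade-off the abstract weighs against the full affine fit.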
ODIN-object-oriented development interface for NMR.
Jochimsen, Thies H; von Mengershausen, Michael
2004-09-01
A cross-platform development environment for nuclear magnetic resonance (NMR) experiments is presented. It allows rapid prototyping of new pulse sequences and provides a common programming interface for different system types. With this object-oriented interface, implemented in C++, the programmer can write applications to control an experiment that can be executed on different measurement devices, even from different manufacturers, without modifying the source code. Due to the clear design of the software, new pulse sequences can be created, tested, and executed within a short time. To post-process the acquired data, an interface to well-known numerical libraries is part of the framework, allowing transparent integration of the data-processing instructions into the measurement module. The software focuses mainly on NMR imaging, but can also be used, with limitations, for spectroscopic experiments. To demonstrate the capabilities of the framework, results of the same experiment, carried out on two NMR imaging systems from different manufacturers, are shown and compared with the results of a simulation.
Koch, Michael; Denzler, Joachim; Redies, Christoph
2010-01-01
Art images and natural scenes have in common that their radially averaged (1D) Fourier spectral power falls according to a power law with increasing spatial frequency (1/f² characteristics), which implies that the power spectra have scale-invariant properties. In the present study, we show that other categories of man-made images, cartoons and graphic novels (comics and mangas), have similar properties. Furthermore, we extend our investigations to 2D power spectra. In order to determine whether the Fourier power spectra of man-made images differed from those of other categories of images (photographs of natural scenes, objects, faces and plants, and scientific illustrations), we analyzed their 2D power spectra by principal component analysis. Results indicated that the first fifteen principal components allowed a partial separation of the different image categories. The differences between the image categories were studied in more detail by analyzing whether the mean power and the slope of the power gradients from low to high spatial frequencies varied across orientations in the power spectra. Mean power was generally higher in cardinal orientations both in real-world photographs and artworks, with no systematic difference between the two types of images. However, the slope of the power gradients showed a lower degree of mean variability across spectral orientations (i.e., more isotropy) in art images, cartoons and graphic novels than in photographs of comparable subject matter. Taken together, these results indicate that art images, cartoons and graphic novels possess relatively uniform 1/f² characteristics across all orientations. In conclusion, the man-made stimuli studied, which were presumably produced to evoke pleasant and/or enjoyable visual perception in human observers, form a subset of all images and share statistical properties in their Fourier power spectra.
Whether these properties are necessary or sufficient to induce aesthetic perception remains to be investigated. PMID:20808863
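The radially averaged 1D Fourier power spectrum and its log-log slope (about -2 for images with 1/f² characteristics) can be sketched as follows; this is a generic illustration, not the authors' code:

```python
import numpy as np

def radial_power_spectrum(img):
    """Radially averaged 1D power spectrum of a 2D image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)      # ring index per pixel
    counts = np.bincount(r.ravel())
    return np.bincount(r.ravel(), weights=power.ravel()) / counts

def spectral_slope(radial):
    """Slope of log power vs. log frequency; ~-2 for 1/f^2 images."""
    freqs = np.arange(1, len(radial))             # skip the DC ring
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[1:]), 1)
    return slope
```

The 2D analysis in the abstract replaces the radial average with per-orientation gradients of the same power map, so the isotropy finding amounts to this slope varying little with angle.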
Ball-scale based hierarchical multi-object recognition in 3D medical images
NASA Astrophysics Data System (ADS)
Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian
2010-03-01
This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) a semi-automatic way of constructing a multi-object shape model assembly; (b) a novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images; (c) a hierarchical mechanism of positioning the model, in a one-shot way, in a given image from knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) incorporating a large number of objects improves the recognition accuracy dramatically; (2) the recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly constitutes coarse recognition, while delineation itself constitutes the finest recognition; (3) scale yields useful information about the relationship between the model assembly and any given image, such that recognition results in a placement of the model close to the actual pose without any elaborate searches or optimization; (4) effective object recognition can make delineation most accurate.
The 4-D approach to visual control of autonomous systems
NASA Technical Reports Server (NTRS)
Dickmanns, Ernst D.
1994-01-01
Development of a 4-D approach to dynamic machine vision is described. Core elements of this method are spatio-temporal models oriented towards objects and the laws of perspective projection in a forward mode. Integration of multi-sensory measurement data was achieved through spatio-temporal models serving as invariants for object recognition. Situation assessment and long-term predictions were enabled by maintaining a symbolic 4-D image of processes involving objects. Behavioral capabilities were easily realized by state feedback and feed-forward control.
Nucleus detection using gradient orientation information and linear least squares regression
NASA Astrophysics Data System (ADS)
Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.
2015-03-01
Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine the disease status or outcome. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nucleus seed detection method for individual and overlapping nuclei that utilizes gradient orientation (direction) information. The initial nucleus segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. By taking the first derivative of the angle of the gradient orientation, high-concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with a goodness-of-fit statistic in a linear least squares sense. The junctions then determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to manual segmentation.
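The junction-detection step — tracing an orientation angle along a boundary and flagging points where its first derivative is large — can be sketched generically (the `concavity_points` helper and its threshold are illustrative, not the authors' implementation):

```python
import numpy as np

def concavity_points(boundary, threshold=np.pi / 4):
    """Candidate junctions on a traced boundary: indices where the
    orientation angle changes abruptly (large first derivative)."""
    pts = np.asarray(boundary, dtype=float)
    d = np.diff(pts, axis=0)                       # steps along the trace
    ang = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))  # continuous angle
    dang = np.diff(ang)                            # first derivative
    return np.where(np.abs(dang) > threshold)[0] + 1
```

On a smooth (convex) nuclear outline the angle changes gradually, so only the sharp turns where two nuclei abut exceed the threshold.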
Speed skills: measuring the visual speed analyzing properties of primate MT neurons.
Perrone, J A; Thiele, A
2001-05-01
Knowing the direction and speed of moving objects is often critical for survival. However, it is poorly understood how cortical neurons process the speed of image movement. Here we tested MT neurons using moving sine-wave gratings of different spatial and temporal frequencies, and mapped out the neurons' spatiotemporal frequency response profiles. The maps typically had oriented ridges of peak sensitivity as expected for speed-tuned neurons. The preferred speed estimate, derived from the orientation of the maps, corresponded well to the preferred speed when moving bars were presented. Thus, our data demonstrate that MT neurons are truly sensitive to the object speed. These findings indicate that MT is not only a key structure in the analysis of direction of motion and depth perception, but also in the analysis of object speed.
NASA Technical Reports Server (NTRS)
Oommen, Thomas; Rebbapragada, Umaa; Cerminaro, Daniel
2012-01-01
In this study, we perform a case study on imagery from the Haiti earthquake that evaluates a novel object-based approach for characterizing earthquake-induced surface effects of liquefaction against a traditional pixel-based change detection technique. Our technique, which combines object-oriented change detection with discriminant/categorical functions, shows the power of distinguishing earthquake-induced surface effects from changes in buildings using the object properties concavity, convexity, orthogonality and rectangularity. Our results suggest that object-based analysis holds promise for automatically extracting earthquake-induced damage from high-resolution aerial/satellite imagery.
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.
2018-04-01
A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings from post-earthquake imagery. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. The new method greatly improves extraction accuracy and efficiency compared with the traditional object-oriented method, and has good potential for wider application to information extraction of damaged buildings. In addition, the new method can be applied to images of damaged buildings at different resolutions, in order to seek the optimal observation scale through accuracy evaluation. The results suggest that the optimal observation scale for damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.
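The abstract does not give its distance formulas; as a hedged sketch, a distance matrix of minimum separation distances between class sample sets can be computed with plain Euclidean distances (`min_separation` and `class_distance_matrix` are illustrative names, not the authors' definitions):

```python
import numpy as np

def min_separation(a, b):
    """Smallest Euclidean distance between samples of two classes."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min()

def class_distance_matrix(classes):
    """Symmetric matrix of pairwise minimum separation distances;
    a feature space where these are large separates the classes well."""
    n = len(classes)
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            m[i, j] = m[j, i] = min_separation(classes[i], classes[j])
    return m
```

Selecting the feature subset that maximizes the smallest off-diagonal entry is one plausible reading of "optimal feature space" in this setting.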
NASA Astrophysics Data System (ADS)
Yang, Y.; Tenenbaum, D. E.
2009-12-01
The process of urbanization has major effects on both human and natural systems. In order to monitor these changes and better understand how urban ecological systems work, urban spatial structure and its variation first need to be quantified at a fine scale. Because land-use and land-cover (LULC) in urbanizing areas is highly heterogeneous, the classification of urbanizing environments is among the most challenging tasks in remote sensing. Although pixel-based methods are a common way to do classification, the results are not good enough for many research objectives that require more accurate classification data at fine scales. Transect sampling and object-oriented classification methods are more appropriate for urbanizing areas. Tenenbaum used a transect sampling method, with a computer-based facility within a widely available commercial GIS, in the Glyndon and Upper Baismans Run catchments, Baltimore, Maryland. It was a two-tiered classification system, comprising a primary level (7 classes) and a secondary level (37 categories), and statistical information on LULC was collected. W. Zhou applied an object-oriented method at the parcel level in the Gwynn's Falls Watershed, which includes the two previously mentioned catchments, and six classes were extracted. The two urbanizing catchments are located in greater Baltimore, Maryland and drain into Chesapeake Bay. In this research, the two methods are compared for 6 classes (woody, herbaceous, water, ground, pavement and structure). The comparison uses the segments in the transect method to extract LULC information from the results of the object-oriented method. Classification results were compared in order to evaluate the difference between the two methods. The overall proportions of LULC classes from the two studies show that the object-oriented method overestimates structures.
For the other five classes, the results from the two methods are similar, except for a difference in the proportions of the woody class. The segment-to-segment comparison shows that the resolution of the light detection and ranging (LIDAR) data used in the object-oriented method affects the accuracy of the classification. Shadows of trees and structures remain a major problem for the object-oriented method. For classes that make up a small proportion of the catchments, such as water, neither method was capable of detecting them.
Audible sonar images generated with proprioception for target analysis.
Kuc, Roman B
2017-05-01
Some blind humans have demonstrated the ability to detect and classify objects with echolocation using palatal clicks. An audible-sonar robot mimics human click emissions, binaural hearing, and head movements to extract interaural time and level differences from target echoes. Targets of various complexity are examined by transverse displacements of the sonar and by target pose rotations that model movements performed by the blind. Controlled sonar movements executed by the robot provide data that model proprioception information available to blind humans for examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form two-dimensional target images that are similar to medical diagnostic ultrasound tomograms. Simple targets, such as single round and square posts, produce distinguishable and recognizable images. More complex targets configured with several simple objects generate diffraction effects and multiple reflections that produce image artifacts. The presentation illustrates the capabilities and limitations of target classification from audible sonar images.
Metlagel, Zoltan; Kikkawa, Yayoi S; Kikkawa, Masahide
2007-01-01
Helical image analysis in combination with electron microscopy has been used to study three-dimensional structures of various biological filaments or tubes, such as microtubules, actin filaments, and bacterial flagella. A number of packages have been developed to carry out helical image analysis. Some biological specimens, however, have a symmetry break (seam) in their three-dimensional structure, even though their subunits are mostly arranged in a helical manner. We refer to these objects as "asymmetric helices". All the existing packages are designed for helically symmetric specimens, and do not allow analysis of asymmetric helical objects, such as microtubules with seams. Here, we describe Ruby-Helix, a new set of programs for the analysis of "helical" objects with or without a seam. Ruby-Helix is built on top of the Ruby programming language and is the first implementation of asymmetric helical reconstruction for practical image analysis. It also allows easier and semi-automated analysis, performing iterative unbending and accurate determination of the repeat length. As a result, Ruby-Helix enables us to analyze motor-microtubule complexes with higher throughput to higher resolution.
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans
2018-04-01
Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control in the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both image resolution and cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area and low power consumption for multi-scale object detection.
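The role of an L1 norm in a HOG pipeline — normalizing blocks of cell histograms so that global illumination or gain changes cancel out — can be sketched in software (a generic illustration of L1 block normalization, not the 65 nm hardware design):

```python
import numpy as np

def l1_normalize_block(cell_histograms, eps=1e-5):
    """L1 block normalization as used in HOG descriptors: divide the
    concatenated cell histograms by their L1 norm, so that a uniform
    gain change in the gradients leaves the descriptor unchanged."""
    v = np.concatenate([np.ravel(h) for h in cell_histograms]).astype(float)
    return v / (np.abs(v).sum() + eps)
```

Scaling all gradient magnitudes by a constant (e.g., camera gain or a brighter light source) scales numerator and denominator alike, which is exactly the invariance the abstract's circuit targets.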
Hemispheric dominance during the mental rotation task in patients with schizophrenia.
Chen, Jiu; Yang, Laiqi; Zhao, Jin; Li, Lanlan; Liu, Guangxiong; Ma, Wentao; Zhang, Yan; Wu, Xingqu; Deng, Zihe; Tuo, Ran
2012-04-01
Mental rotation is a spatial representation conversion capability using an imagined object and either object or self-rotation; this capability is impaired in schizophrenia. Our aim was to provide a more detailed assessment of impaired cognitive functioning in schizophrenia by comparing the electrophysiological profiles of patients with schizophrenia and controls while they completed a mental rotation task using both normally-oriented images and mirror images. This electroencephalographic study compared error rates, reaction times and the topographic map of event-related potentials in 32 participants with schizophrenia and 29 healthy controls during mental rotation tasks involving both normal images and mirror images. Among controls the mean error rate and the mean reaction time for normal images and mirror images were not significantly different, but in the patient group the mean (sd) error rate was higher for mirror images than for normal images (42% [6%] vs. 32% [9%], t=2.64, p=0.031) and the mean reaction time was longer for mirror images than for normal images (587 [11] ms vs. 571 [18] ms, t=2.83, p=0.028). The amplitudes of the P500 component at Pz (parietal area), Cz (central area), P3 (left parietal area) and P4 (right parietal area) were significantly lower in the patient group than in the control group for both normal images and mirror images. In both groups the P500 for both normal and mirror images was significantly higher in the right parietal area (P4) than in the left parietal area (P3). The mental rotation abilities of patients with schizophrenia are impaired for both normally-oriented images and mirror images. Patients with schizophrenia show a diminished left cerebral contribution to the mental rotation task, a more rapid response time, and a differential response to normal images versus mirror images not seen in healthy controls. Specific topographic characteristics of the EEG during mental rotation tasks are potential biomarkers for schizophrenia.
Perception of 3D spatial relations for 3D displays
NASA Astrophysics Data System (ADS)
Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.
2004-05-01
We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
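The discriminability d′ used as the dependent variable above is the difference between the z-transformed hit and false-alarm rates; a minimal sketch (generic signal-detection formula, not the authors' analysis code):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection discriminability: z(H) - z(FA), where z is the
    inverse CDF of the standard normal distribution."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

Chance performance (hit rate equal to false-alarm rate) gives d′ = 0; a subject who reliably spots the 40% stretch under a given viewing condition yields a larger d′ for that condition.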
Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.
Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena
2014-11-01
A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system is endowed with hardware- and software-synchronous image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object in different positions and orientations and at different linear and angular speeds. The system detects the position and orientation of an immobile object with a maximum error of 0.5 mm and 1.6° over the full depth of field, and tracks a moving object at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was below 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance, breathing, and blood pressure, at 0.03-0.05, 0.2, and 1 Hz, respectively. The presented stereo vision system is a precise and robust system to measure brain shift and pulsatility, with accuracy superior to that of other reported systems.
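The frequency-domain analysis described — identifying breathing and cardiac peaks in the motion of the features' center of mass — can be sketched on a synthetic trajectory (the sampling rate and amplitudes are assumed for illustration, not the patient data):

```python
import numpy as np

fs = 30.0                        # assumed camera frame rate (Hz)
t = np.arange(0, 60, 1 / fs)     # 60 s recording
# Synthetic center-of-mass displacement (mm): breathing (~0.2 Hz)
# plus cardiac pulsatility (~1 Hz), both within the reported <0.8 mm.
signal = 0.3 * np.sin(2 * np.pi * 0.2 * t) + 0.1 * np.sin(2 * np.pi * 1.0 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]   # two strongest components
```

With a 60 s window the frequency resolution is 1/60 Hz, fine enough to separate the breathing and blood-pressure peaks the abstract reports; resolving the 0.03-0.05 Hz sympathovagal peak needs a correspondingly longer recording.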
NASA Astrophysics Data System (ADS)
Babayan, Pavel; Smirnov, Sergey; Strotov, Valery
2017-10-01
This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models, and is based on building an outer contour descriptor. The algorithm consists of two stages: learning and recognition. The learning stage is devoted to exploring the reference objects: using the 3D models, a database of training images is built by rendering each model from viewpoints evenly distributed on a sphere, with the sphere points distributed according to the geosphere principle. The gathered training image set is used for calculating descriptors, which are then used in the recognition stage. The recognition stage focuses on estimating the similarity of the captured object to the reference objects by matching an observed image descriptor against the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies. Real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
NASA Astrophysics Data System (ADS)
Teutsch, Michael; Saur, Günter
2011-11-01
Spaceborne SAR imagery offers high capability for wide-ranging maritime surveillance especially in situations, where AIS (Automatic Identification System) data is not available. Therefore, maritime objects have to be detected and optional information such as size, orientation, or object/ship class is desired. In recent research work, we proposed a SAR processing chain consisting of pre-processing, detection, segmentation, and classification for single-polarimetric (HH) TerraSAR-X StripMap images to finally assign detection hypotheses to class "clutter", "non-ship", "unstructured ship", or "ship structure 1" (bulk carrier appearance) respectively "ship structure 2" (oil tanker appearance). In this work, we extend the existing processing chain and are now able to handle full-polarimetric (HH, HV, VH, VV) TerraSAR-X data. With the possibility of better noise suppression using the different polarizations, we slightly improve both the segmentation and the classification process. In several experiments we demonstrate the potential benefit for segmentation and classification. Precision of size and orientation estimation as well as correct classification rates are calculated individually for single- and quad-polarization and compared to each other.
Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns
NASA Astrophysics Data System (ADS)
Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.
2012-07-01
Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet, but it is unsuitable for stereo-cameras, whose calibration implies recovering both camera geometry and their true-to-scale relative orientation. In contrast to all reported methods, which require additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed to be the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs to a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixel for 640×480 web cameras.
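The node-matching step, picking the right-image node closest to the epipolar line of a left-image node, can be sketched as follows (a minimal illustration with hypothetical inputs, not the toolbox's code):

```python
import numpy as np

def epipolar_match(F, x_left, candidates):
    """Return the index of the right-image candidate (u, v) lying closest
    to the epipolar line F @ x of the left-image point x_left.
    F is the 3x3 fundamental matrix; points are in pixel coordinates."""
    x = np.array([x_left[0], x_left[1], 1.0])
    a, b, c = F @ x                              # line a*u + b*v + c = 0
    dists = [abs(a * u + b * v + c) / np.hypot(a, b) for u, v in candidates]
    return int(np.argmin(dists))
```

For a rectified pair (pure horizontal baseline), F reduces to the skew matrix of (1, 0, 0) and the epipolar line of a point is simply its own image row, so the candidate with the nearest row coordinate wins.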
A resolution measure for three-dimensional microscopy
Chao, Jerry; Ram, Sripad; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.
2009-01-01
A three-dimensional (3D) resolution measure for the conventional optical microscope is introduced which overcomes the drawbacks of the classical 3D (axial) resolution limit. Formulated within the context of a parameter estimation problem and based on the Cramer-Rao lower bound, this 3D resolution measure indicates the accuracy with which a given distance between two objects in 3D space can be determined from the acquired image. It predicts that, given enough photons from the objects of interest, arbitrarily small distances of separation can be estimated with prespecified accuracy. Using simulated images of point source pairs, we show that the maximum likelihood estimator is capable of attaining the accuracy predicted by the resolution measure. We also demonstrate how different factors, such as extraneous noise sources and the spatial orientation of the imaged object pair, can affect the accuracy with which a given distance of separation can be determined. PMID:20161040
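The headline claim, that enough photons make arbitrarily small separations estimable with prespecified accuracy, can be illustrated with the textbook Gaussian-PSF simplification of the Cramér-Rao bound. This is an illustrative sketch only; the paper's 3D measure accounts for much more (orientation, noise sources, pixelation), and the names below are assumptions:

```python
import math

def separation_limit(sigma_psf, n_photons):
    """Textbook Cramer-Rao sketch: each point source's location can be
    estimated no better than sigma_psf / sqrt(N) for N detected photons,
    so the distance between two well-separated sources has a lower bound
    of sqrt(2) times that value. More photons -> smaller attainable error."""
    per_source = sigma_psf / math.sqrt(n_photons)
    return math.sqrt(2.0) * per_source
```

The bound shrinks without limit as the photon count grows, which is the qualitative behaviour the resolution measure predicts.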
Evaluation of image deblurring methods via a classification metric
NASA Astrophysics Data System (ADS)
Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo
2012-09-01
The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.
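The task-driven metric itself is simple to state in code: the deblurring score is just the downstream classifier's rate of correct decisions. A sketch with hypothetical label lists (not the authors' evaluation harness):

```python
def classification_rate(true_labels, predicted_labels):
    """Task-driven deblurring metric: the fraction of objects in the
    deblurred images that a downstream classifier labels correctly."""
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return correct / len(true_labels)
```

Two deblurring algorithms are then compared by running the same fixed classifier (e.g. an OCR engine) on each algorithm's output and comparing the resulting rates.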
Robust image matching via ORB feature and VFC for mismatch removal
NASA Astrophysics Data System (ADS)
Ma, Tao; Fu, Wenxing; Fang, Bin; Hu, Fangyu; Quan, Siwen; Ma, Jie
2018-03-01
Image matching is at the base of many image processing and computer vision problems, such as object recognition or structure from motion. Current methods rely on good feature descriptors and mismatch removal strategies for detection and matching. In this paper, we propose a robust image matching approach based on the ORB feature and VFC for mismatch removal. ORB (Oriented FAST and Rotated BRIEF) is an outstanding feature: it offers performance comparable to SIFT at lower computational cost. VFC (Vector Field Consensus) is a state-of-the-art mismatch removal method. The experimental results demonstrate that our method is efficient and robust.
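The descriptor-matching core can be sketched with plain NumPy: ORB descriptors are binary strings compared by Hamming distance. Here a nearest/second-nearest ratio test stands in for VFC mismatch removal (a substitution for illustration; the paper's VFC step is a different, more principled filter):

```python
import numpy as np

def hamming_match(desc1, desc2, ratio=0.8):
    """Brute-force matching of ORB-style binary descriptors (rows of uint8
    arrays) by Hamming distance, keeping matches whose best distance beats
    ratio * second-best distance."""
    matches = []
    for i in range(len(desc1)):
        # XOR, then count differing bits against every descriptor in desc2
        dists = np.unpackbits(desc1[i] ^ desc2, axis=1).sum(axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

In a full pipeline the surviving matches would then be passed to a geometric consistency filter such as VFC or RANSAC.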
Development of a DICOM library
NASA Astrophysics Data System (ADS)
Kim, Dongsun; Shin, Dongkyu M.; Kim, Dongyoun M.
2001-08-01
An object-oriented DICOM decoding library was developed as a DLL for MS-Windows application development. It supports all DICOM standard Transfer Syntaxes, multi-frame images, RLE decoding, and window level adjustment. An image library for medical applications was also developed, as a DLL and an ActiveX control, using the proposed DICOM library. It supports display of DICOM images, cine mode, and basic manipulations. As an application of the proposed image library, a couple of DICOM viewers were developed: one can be used as an off-line DICOM workstation, and the other for browsing local DICOM files.
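Window level adjustment, one of the library's listed features, is a linear mapping of raw pixel values to the display range. A simplified sketch (the DICOM standard's VOI LUT transformation has additional cases and conventions this ignores):

```python
import numpy as np

def apply_window(pixels, center, width):
    """Linear window-level mapping of raw pixel values to 8-bit display:
    values below (center - width/2) go to black, values above
    (center + width/2) go to white, with a linear ramp in between."""
    lo = center - width / 2.0
    scaled = (pixels.astype(np.float64) - lo) / width
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)
```

Narrowing the width raises contrast within the chosen value band, which is why the same CT slice is viewed with different window settings for lung, bone, or soft tissue.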
JView Visualization for Next Generation Air Transportation System
2011-01-01
hardware graphics acceleration. JView relies on concrete Object Oriented Design (OOD) and programming techniques to provide a robust and venue non...visibility priority of a texture set. A good example of this is when you have translucent images that should always be visible over the other textures...elements present in the scene. • Capture Alpha. Allows the alpha color channel (translucency) to be saved when capturing images or movies of a 3D scene
Malamy, Jocelyn; Shribak, Michael
2017-01-01
Epithelial cell dynamics can be difficult to study in intact animals or tissues. Here we use the medusa form of the hydrozoan Clytia hemisphaerica, which is covered with a monolayer of epithelial cells, to test the efficacy of an orientation-independent differential interference contrast (OI-DIC) microscope for in vivo imaging of wound healing. OI-DIC provides a phase image of unprecedented resolution of epithelial cells closing a wound in a live, non-transgenic animal model. In particular, the OI-DIC microscope equipped with a 40×/0.75NA objective lens and using illumination light with a wavelength of 546 nm demonstrated a resolution of 460 nm. The repair of individual cells, the adhesion of cells to close a gap, and the concomitant contraction of these cells during closure are clearly visualized. PMID:29345317
Dayhoff, R E; Maloney, D L; Kenney, T J; Fletcher, R D
1991-01-01
The VA's hospital information system, the Decentralized Hospital Computer Program (DHCP), is an integrated system based on a powerful set of software tools with shared data accessible from any of its application modules. It includes many functionally specific application subsystems such as laboratory, pharmacy, radiology, and dietetics. Physicians need applications that cross these application boundaries to provide useful and convenient patient data. One of these multi-specialty applications, the DHCP Imaging System, integrates multimedia data to provide clinicians with comprehensive patient-oriented information. User requirements for cross-disciplinary image access can be studied to define needs for similar text data access. Integration approaches must be evaluated both for their ability to deliver patient-oriented text data rapidly and their ability to integrate multimedia data objects. Several potential integration approaches are described as they relate to the DHCP Imaging System.
A group filter algorithm for sea mine detection
NASA Astrophysics Data System (ADS)
Cobb, J. Tory; An, Myoung; Tolimieri, Richard
2005-06-01
Automatic detection of sea mines in coastal regions is a difficult task due to the highly variable sea bottom conditions present in the underwater environment. Detection systems must be able to discriminate objects which vary in size, shape, and orientation from naturally occurring and man-made clutter. Additionally, these automated systems must be computationally efficient to be incorporated into unmanned underwater vehicle (UUV) sensor systems characterized by high sensor data rates and limited processing abilities. Using noncommutative group harmonic analysis, a fast, robust sea mine detection system is created. A family of unitary image transforms associated with noncommutative groups is generated and applied to side scan sonar image files supplied by Naval Surface Warfare Center Panama City (NSWC PC). These transforms project key image features, geometrically defined structures with orientations, and localized spectral information into distinct orthogonal components or feature subspaces of the image. The performance of the detection system is compared against the performance of an independent detection system in terms of probability of detection (Pd) and probability of false alarm (Pfa).
Power spectrum weighted edge analysis for straight edge detection in images
NASA Astrophysics Data System (ADS)
Karvir, Hrishikesh V.; Skipper, Julie A.
2007-04-01
Most man-made objects provide characteristic straight line edges and, therefore, edge extraction is a commonly used target detection tool. However, noisy images often yield broken edges that lead to missed detections, and extraneous edges that may contribute to false target detections. We present a sliding-block approach for target detection using weighted power spectral analysis. In general, straight line edges appearing at a given frequency are represented as a peak in the Fourier domain at a radius corresponding to that frequency, and a direction corresponding to the orientation of the edges in the spatial domain. Knowing the edge width and spacing between the edges, a band-pass filter is designed to extract the Fourier peaks corresponding to the target edges and suppress image noise. These peaks are then detected by amplitude thresholding. The frequency band width and the subsequent spatial filter mask size are variable parameters to facilitate detection of target objects of different sizes under known imaging geometries. Many military objects, such as trucks, tanks and missile launchers, produce definite signatures with parallel lines and the algorithm proves to be ideal for detecting such objects. Moreover, shadow-casting objects generally provide sharp edges and are readily detected. The block operation procedure offers advantages of significant reduction in noise influence, improved edge detection, faster processing speed and versatility to detect diverse objects of different sizes in the image. With Scud missile launcher replicas as target objects, the method has been successfully tested on terrain board test images under different backgrounds, illumination and imaging geometries with cameras of differing spatial resolution and bit-depth.
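The core of the method, extracting Fourier peaks inside a band-pass region and thresholding them, can be sketched as follows (a minimal NumPy illustration under assumed square images and a hard annular mask, not the authors' weighted implementation):

```python
import numpy as np

def bandpass_peak(img, r_lo, r_hi):
    """Find the strongest Fourier peak inside an annular band-pass mask.
    Parallel straight edges with spacing s in an N-pixel square image
    produce a peak at radius ~N/s, in the direction normal to the edges."""
    N = img.shape[0]
    # Subtract the mean so the DC term does not dominate, then center the spectrum
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    yy, xx = np.mgrid[0:N, 0:N]
    r = np.hypot(yy - N // 2, xx - N // 2)
    mag = np.abs(F) * ((r >= r_lo) & (r <= r_hi))   # annular band-pass
    iy, ix = np.unravel_index(np.argmax(mag), mag.shape)
    return int(ix) - N // 2, int(iy) - N // 2        # peak frequency coords
```

For vertical lines every 8 pixels in a 64-pixel image, the fundamental peak appears at radius 64/8 = 8 along the horizontal frequency axis; its angle encodes the edge orientation.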
NASA Astrophysics Data System (ADS)
Selsam, Peter; Schwartze, Christian
2016-10-01
Providing software solutions via the internet has been known for quite some time and is now an increasing trend marketed as "software as a service". Many business units have accepted the new methods and streamlined IT strategies by offering web-based infrastructures for external software usage, but geospatial applications featuring very specialized services or functionalities on demand are still rare. Originally applied in desktop environments, the ILMSimage tool for remote sensing image analysis and classification was modified in its communication structures and enabled to run on a high-power server, benefiting from Taverna software. On top, a GIS-like, web-based user interface guides the user through the different steps in ILMSimage. ILMSimage combines object-oriented image segmentation with pattern recognition features. Basic image elements form a construction set to model large image objects with diverse and complex appearance. There is no need for the user to set up detailed object definitions. Training is done by delineating one or more typical examples (templates) of the desired object using a simple vector polygon. The template can be large and does not need to be homogeneous, and it is completely independent of the segmentation. The object definition is done entirely by the software.
Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James
1997-01-01
Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338
Fast and objective detection and analysis of structures in downhole images
NASA Astrophysics Data System (ADS)
Wedge, Daniel; Holden, Eun-Jung; Dentith, Mike; Spadaccini, Nick
2017-09-01
Downhole acoustic and optical televiewer images, and formation microimager (FMI) logs are important datasets for structural and geotechnical analyses for the mineral and petroleum industries. Within these data, dipping planar structures appear as sinusoids, often in incomplete form and in abundance. Their detection is a labour intensive and hence expensive task, and as such is a significant bottleneck in data processing, as companies may have hundreds of kilometres of logs to process each year. We present an image analysis system that harnesses the power of automated image analysis and provides an interactive user interface to support the analysis of televiewer images by users with different objectives. Our algorithm rapidly produces repeatable, objective results. We have embedded it in an interactive workflow to complement geologists' intuition and experience in interpreting data, to improve efficiency and to assist, rather than replace, the geologist. The main contributions include a new image quality assessment technique for highlighting image areas most suited to automated structure detection and for detecting boundaries of geological zones, and a novel sinusoid detection algorithm for detecting and selecting sinusoids with given confidence levels. Further tools are provided to perform rapid analysis of structures and further detection, e.g. limited to specific orientations.
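The sinusoid model underlying such detectors can be fitted by linear least squares once candidate trace pixels are found. A sketch (the paper's detector with confidence levels is considerably more involved; function and variable names are illustrative):

```python
import numpy as np

def fit_sinusoid(azimuth, depth):
    """Fit d(theta) = c + a*sin(theta) + b*cos(theta) by linear least squares.
    In an unwrapped televiewer image a dipping planar structure traces such
    a sinusoid: the amplitude relates to dip magnitude, the phase to dip
    direction, and c to the intersection depth."""
    A = np.column_stack([np.ones_like(azimuth), np.sin(azimuth), np.cos(azimuth)])
    (c, a, b), *_ = np.linalg.lstsq(A, depth, rcond=None)
    amplitude = float(np.hypot(a, b))
    phase = float(np.arctan2(b, a))
    return float(c), amplitude, phase
```

Writing the sinusoid as a sum of sine and cosine terms keeps the problem linear, so no iterative phase search is needed.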
Gomez-Cardona, Daniel; Cruz-Bastida, Juan Pablo; Li, Ke; Budde, Adam; Hsieh, Jiang; Chen, Guang-Hong
2016-01-01
Purpose: Noise characteristics of clinical multidetector CT (MDCT) systems can be quantified by the noise power spectrum (NPS). Although the NPS of CT has been extensively studied in the past few decades, the joint impact of the bowtie filter and object position on the NPS has not been systematically investigated. This work studies the interplay of these two factors on the two dimensional (2D) local NPS of a clinical CT system that uses the filtered backprojection algorithm for image reconstruction. Methods: A generalized NPS model was developed to account for the impact of the bowtie filter and image object location in the scan field-of-view (SFOV). For a given bowtie filter, image object, and its location in the SFOV, the shape and rotational symmetries of the 2D local NPS were directly computed from the NPS model without going through the image reconstruction process. The obtained NPS was then compared with the measured NPSs from the reconstructed noise-only CT images in both numerical phantom simulation studies and experimental phantom studies using a clinical MDCT scanner. The shape and the associated symmetry of the 2D NPS were classified by borrowing the well-known atomic spectral symbols s, p, and d, which correspond to circular, dumbbell, and cloverleaf symmetries, respectively, of the wave function of electrons in an atom. Finally, simulated bar patterns were embedded into experimentally acquired noise backgrounds to demonstrate the impact of different NPS symmetries on the visual perception of the object. Results: (1) For a central region in a centered cylindrical object, an s-wave symmetry was always present in the NPS, no matter whether the bowtie filter was present or not. In contrast, for a peripheral region in a centered object, the symmetry of its NPS was highly dependent on the bowtie filter, and both p-wave symmetry and d-wave symmetry were observed in the NPS. 
(2) For a centered region-of-interest (ROI) in an off-centered object, the symmetry of its NPS was found to be different from that of a peripheral ROI in the centered object, even when the physical positions of the two ROIs relative to the isocenter were the same. (3) The potential clinical impact of the highly anisotropic NPS, caused by the interplay of the bowtie filter and position of the image object, was highlighted in images of specific bar patterns oriented at different angles. The visual perception of the bar patterns was found to be strongly dependent on their orientation. Conclusions: The NPS of CT depends strongly on the bowtie filter and object position. Even if the location of the ROI with respect to the isocenter is fixed, there can be different symmetries in the NPS, which depend on the object position and the size of the bowtie filter. For an isolated off-centered object, the NPS of its CT images cannot be represented by the NPS measured from a centered object. PMID:27487866
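The NPS measurement referred to throughout can be sketched as the ensemble-averaged squared DFT of mean-subtracted noise-only ROIs (a standard estimator; normalization conventions vary, and this simple form omits detrending refinements used in practice):

```python
import numpy as np

def nps_2d(noise_rois, pixel_size=1.0):
    """Estimate the 2D noise power spectrum (NPS) as the ensemble average
    of |DFT|^2 over mean-subtracted noise-only ROIs, scaled so that for
    pixel_size = 1 the NPS sums to Nx*Ny times the pixel variance."""
    rois = np.asarray(noise_rois, dtype=np.float64)
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove each ROI's mean
    ny, nx = rois.shape[1], rois.shape[2]
    spectra = np.abs(np.fft.fft2(rois)) ** 2             # per-ROI power spectra
    return spectra.mean(axis=0) * (pixel_size ** 2) / (nx * ny)
```

For uncorrelated (white) noise the estimate is flat; the bowtie-filter and position effects the abstract describes appear as the s-, p-, or d-like anisotropies of this 2D map.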
Improving Image Matching by Reducing Surface Reflections Using Polarising Filter Techniques
NASA Astrophysics Data System (ADS)
Conen, N.; Hastedt, H.; Kahmen, O.; Luhmann, T.
2018-05-01
In dense stereo matching applications surface reflections may lead to incorrect measurements and blunders in the resulting point cloud. To overcome the problem of disturbing reflections, polarising filters can be mounted on the camera lens and light source. Reflections in the images can be suppressed by crossing the polarising directions of the filters, leading to homogeneously illuminated images and better matching results. However, the filter may influence the camera's orientation parameters as well as the measuring accuracy. To quantify these effects, a calibration and an accuracy analysis are conducted within a spatial test arrangement according to the German guideline VDI/VDE 2634.1 (2002), using a DSLR with and without a polarising filter. In a second test, the interior orientation is analysed in more detail. The results do not show significant changes of the measuring accuracy in object space, and only very small changes of the interior orientation (Δc ≤ 4 μm) with the polarising filter in use. Since in medical applications many tiny reflections are present and impede robust surface measurements, a prototypic trinocular endoscope is equipped with the polarising technique, and its interior and relative orientation is determined and analysed. The advantage of the polarising technique for medical image matching is shown in an experiment with a moistened pig kidney: the accuracy and completeness of the resulting point cloud are clearly improved when using polarising filters. Furthermore, an accuracy analysis using a laser triangulation system is performed and the special reflection properties of metallic surfaces are presented.
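The suppression mechanism rests on Malus's law: specularly reflected light largely keeps the source polarisation, so an analyser crossed with the source polariser attenuates it as cos²θ, while depolarised diffuse light from the object still passes at roughly half intensity. A one-line sketch of the law:

```python
import math

def crossed_polarizer_transmittance(theta_deg):
    """Malus's law: fraction of linearly polarised intensity passed by an
    analyser whose axis is at theta degrees to the polarisation direction.
    Crossed filters (90 deg) suppress specular reflections; depolarised
    diffuse light is transmitted at ~0.5 regardless of analyser angle."""
    return math.cos(math.radians(theta_deg)) ** 2
```

This is why the crossed configuration darkens glints without extinguishing the matte surface texture needed for dense matching.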
High contrast imaging through adaptive transmittance control in the focal plane
NASA Astrophysics Data System (ADS)
Dhadwal, Harbans S.; Rastegar, Jahangir; Feng, Dake
2016-05-01
High contrast imaging in the presence of a bright background is a challenging problem encountered in diverse applications, ranging from the daily chore of driving into a sun-drenched scene to the in vivo use of biomedical imaging in various types of keyhole surgery. Imaging in the presence of bright sources saturates the vision system, resulting in loss of scene fidelity, corresponding to low image contrast and reduced resolution. The problem is exacerbated in retro-reflective imaging systems, where the light sources illuminating the object are unavoidably strong and typically mask the object features. This manuscript presents a novel theoretical framework, based on nonlinear analysis and adaptive focal plane transmittance, to selectively remove object-domain sources of background light from the image plane, resulting in local and global increases in image contrast. The background signal can either be of a global specular nature, giving rise to parallel illumination from the entire object surface, or can be represented by a mosaic of randomly oriented, small specular surfaces; the latter is more representative of practical real-world imaging systems. The background signal thus comprises groups of oblique rays corresponding to the distributions of the mosaic surfaces. Through the imaging system, light from a group of like surfaces converges to a localized spot in the focal plane of the lens and then diverges to cast a localized bright spot in the image plane. The transmittance of a spatial light modulator positioned in the focal plane can therefore be adaptively controlled to block a particular source of background light, so that the image plane intensity is entirely due to the object features. Experimental image data is presented to verify the efficacy of the methodology.
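The focal-plane blocking can be simulated numerically: in an idealized 4f arrangement the focal plane carries the Fourier transform of the input field, so zeroing a small disc there removes the corresponding group of parallel rays. A sketch under scalar-optics, square-grid assumptions (not the paper's nonlinear framework):

```python
import numpy as np

def block_focal_spot(field, cx, cy, radius):
    """Idealized 4f-system sketch: the lens focal plane holds the (shifted)
    Fourier transform of the input field, so zeroing a disc around (cx, cy)
    removes one group of parallel rays, i.e. one specular background
    direction, before the second lens re-images the scene."""
    n = field.shape[0]
    F = np.fft.fftshift(np.fft.fft2(field))          # focal-plane field
    yy, xx = np.mgrid[0:n, 0:n]
    mask = np.hypot(xx - cx, yy - cy) > radius       # opaque disc on the SLM
    return np.fft.ifft2(np.fft.ifftshift(F * mask))  # image-plane field
```

A tilted plane wave (one specular glare direction) focuses to a single off-axis spot; blocking that spot removes the glare while leaving other field components untouched.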
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
Metric invariance in object recognition: a review and further evidence.
Cooper, E E; Biederman, I; Hummel, J E
1992-06-01
Phenomenologically, human shape recognition appears to be invariant with changes of orientation in depth (up to parts occlusion), position in the visual field, and size. Recent versions of template theories (e.g., Ullman, 1989; Lowe, 1987) assume that these invariances are achieved through the application of transformations such as rotation, translation, and scaling of the image so that it can be matched metrically to a stored template. Presumably, such transformations would require time for their execution. We describe recent priming experiments in which the effects of a prior brief presentation of an image on its subsequent recognition are assessed. The results of these experiments indicate that the invariance is complete: the magnitude of visual priming (as distinct from name or basic-level concept priming) is not affected by a change in position, size, orientation in depth, or the particular lines and vertices present in the image, as long as representations of the same components can be activated. An implemented seven-layer neural network model (Hummel & Biederman, 1992) that captures these fundamental properties of human object recognition is described. Given a line drawing of an object, the model activates a viewpoint-invariant structural description of the object, specifying its parts and their interrelations. Visual priming is interpreted as a change in (a) the connection weights for the activation of cells, termed geon feature assemblies (GFAs), that conjoin the output of units representing invariant, independent properties of a single geon and its relations (such as its type, aspect ratio, and relations to other geons), or (b) the connection weights by which several GFAs activate a cell representing an object.
Image wavelet decomposition and applications
NASA Technical Reports Server (NTRS)
Treil, N.; Mallat, S.; Bajcsy, R.
1989-01-01
The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, taking a look at the human visual system can give us an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems, ranging from pixel processing to high-level segmentation and complex object recognition. Contrasting an image at different representations provides useful information such as edges. An example of low-level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. Thus, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher-level applications such as edge and corner finding, which in turn provide two basic steps toward image segmentation. The possibilities of feedback between different levels of processing are also discussed.
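One level of the multiresolution decomposition described can be sketched with the Haar wavelet, the simplest case (the paper's wavelet filters and orientation sectors differ; this only illustrates the split into an approximation and orientation-selective detail bands):

```python
import numpy as np

def haar_level(img):
    """One level of the 2D Haar wavelet transform on an even-sized image.
    Returns the approximation band ll (2x2 block averages) plus three
    orientation-selective detail bands (lh, hl, hh), which respond to
    structure of the matching orientation."""
    a = img[0::2, :] + img[1::2, :]   # pairwise row sums
    d = img[0::2, :] - img[1::2, :]   # pairwise row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return ll, lh, hl, hh
```

Applying the same step recursively to `ll` yields the resolution pyramid in which features can be contrasted across scales.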
NASA Astrophysics Data System (ADS)
Markiewicz, J. S.; Kowalczyk, M.; Podlasiak, P.; Bakuła, K.; Zawieska, D.; Bujakiewicz, A.; Andrzejewska, E.
2013-12-01
Due to considerable development of the non - invasion measurement technologies, taking advantages from the distance measurement, the possibility of data acquisition increased and at the same time the measurement period has been reduced. This, by combination of close range laser scanning data and images, enabled the wider expansion of photogrammetric methods effectiveness in registration and analysis of cultural heritage objects. Mentioned integration allows acquisition of objects three - dimensional models and in addition digital image maps - true - ortho and vector products. The quality of photogrammetric products is defined by accuracy and the range of content, therefore by number and the minuteness of detail. That always depends on initial data geometrical resolution. The research results presented in the following paper concern the quality valuation of two products, image of true - ortho and vector data, created for selected parts of architectural object. Source data is represented by point collection i n cloud, acquired from close range laser scanning and photo images. Both data collections has been acquired with diversified resolutions. The exterior orientation of images and several versions of the true - ortho are based on numeric models of the object, acquired with specified resolutions. The comparison of these products gives the opportunity to rate the influence of initial data resolution on their quality (accuracy, information volume). Additional analysis will be performed on the base of vector product s comparison, acquired from monoplotting and true - ortho images. As a conclusion of experiment it was proved that geometric resolution has significant impact on the possibility of generation and on the accuracy of relative orientation TLS scans. If creation of high - resolution products is considered, scanning resolution of about 2 mm should be applied and in case of architecture details - 1 mm. 
It was also noted that scanning angle and object structure have a significant influence on the accuracy and completeness of the data. For the creation of true-orthoimages for architectural purposes, high-resolution ground-based images in geometry close to the normal case are recommended to improve quality. The use of grayscale true-orthoimages with values taken from scanner intensity is not advised. The research also proved that the accuracy of manual and automated vectorisation depends significantly on the resolution of the generated orthoimages (scan and image resolution), and mainly on the blur effect and resulting pixel size.
Optical correlators for automated rendezvous and capture
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1991-01-01
The paper begins with a description of optical correlation, a process in which the propagation physics of coherent light is used to process images and extract information. The processed image is operated on as an area, rather than as a collection of points, and an essentially instantaneous convolution is performed on that image to provide the sensory data. An image is sensed and encoded onto a coherent wavefront, and the propagation is arranged to create a bright spot where the image matches a model of the desired object. The brightness of the spot indicates the degree of resemblance of the viewed image to the model, and the location of the bright spot provides pointing information. The process can be utilized for AR&C to identify objects among known reference types, estimate an object's location and orientation, and interact with the control system. System characteristics (speed, robustness, accuracy, small form factor) are adequate to meet most requirements. The correlator exploits the fact that bosons and fermions pass through each other. Since the image source is input as an electronic data set, conventional imagers can be used. In systems where the image is input directly, the correlating element must be at the sensing location.
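A digital analogue of the matched-filter correlation described above can be sketched in a few lines of NumPy: the peak of the correlation surface plays the role of the bright spot, with its height measuring resemblance and its position giving pointing information. The scene and template below are invented for illustration.

```python
import numpy as np

def correlate_fft(image, template):
    """Cross-correlate image with template via FFT (a digital analogue
    of an optical matched-filter correlator). The peak height indicates
    resemblance to the model; the peak position gives pointing data."""
    F = np.fft.fft2(image)
    H = np.conj(np.fft.fft2(template, s=image.shape))  # matched filter
    corr = np.real(np.fft.ifft2(F * H))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak

# Invented example: a bright 3x3 blob hidden in a dark scene.
scene = np.zeros((64, 64))
scene[40:43, 20:23] = 1.0
template = np.ones((3, 3))
corr, peak = correlate_fft(scene, template)  # peak lands at the blob
```

An optical correlator performs the same computation at the speed of light propagation; the digital version merely makes the peak-brightness/peak-location interpretation easy to verify.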
From tiger to panda: animal head detection.
Zhang, Weiwei; Sun, Jian; Tang, Xiaoou
2011-06-01
Robust object detection has many important applications in real-world online photo processing. For example, both Google image search and MSN live image search have integrated human face detectors to retrieve face or portrait photos. Inspired by the success of this face-filtering approach, in this paper we focus on another popular online photo category, animals, one of the top five categories in the MSN live image search query log. As a first attempt, we focus on the problem of head detection for a set of relatively large land animals that are popular on the internet, such as the cat, tiger, panda, fox, and cheetah. First, we propose a new set of gradient-oriented features, Haar of Oriented Gradients (HOOG), to effectively capture the shape and texture of animal heads. Then, we propose two detection algorithms, brute-force detection and deformable detection, to exploit the shape and texture features simultaneously. Experimental results on 14,379 well-labeled animal images validate the superiority of the proposed approach. Additionally, we apply the animal head detector to improve image search results through text-based online photo search result filtering.
Comparative study of bowtie and patient scatter in diagnostic CT
NASA Astrophysics Data System (ADS)
Prakash, Prakhar; Boudry, John M.
2017-03-01
A fast, GPU-accelerated Monte Carlo engine for simulating the relevant photon interaction processes over the diagnostic energy range in third-generation CT systems was developed to study the relative contributions of bowtie and object scatter to the total scatter reaching an imaging detector. Primary and scattered projections of an elliptical water phantom (major axis 300 mm) with muscle and fat inserts were simulated for a typical diagnostic CT system as a function of anti-scatter grid (ASG) configuration. The ASG design space explored grid orientation, i.e. tungsten septa either (a) parallel or (b) parallel and perpendicular to the axis of rotation, as well as septa height. The resulting projections were reconstructed, and the scatter-induced image degradation was quantified using common CT image metrics, such as Hounsfield unit (HU) inaccuracy and loss in contrast, along with a qualitative review of image artifacts. Results indicate that object scatter dominates total scatter in the detector channels under the shadow of the imaged object, with the bowtie scatter fraction progressively increasing towards the edges of the object projection. Object scatter was shown to be the driving factor behind HU inaccuracy and contrast reduction in the simulated images, while shading artifacts and elevated loss in HU accuracy at the object boundary were largely attributed to bowtie scatter. Because the impact of bowtie scatter could not be sufficiently mitigated with a large grid-ratio ASG, algorithmic correction may be necessary to further mitigate these artifacts.
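The core of any photon Monte Carlo engine of this kind is sampling interaction depths from the Beer-Lambert law. A toy version (not the paper's GPU engine, and with a rough assumed attenuation coefficient for water) already illustrates why object scatter dominates behind a 300 mm phantom:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed rough total linear attenuation coefficient of water at CT
# energies, in 1/mm (illustrative value, not from the paper).
MU = 0.02
THICKNESS = 300.0   # mm, the phantom major axis quoted in the abstract
N = 100_000

# Beer-Lambert: the depth of the first interaction is exponentially
# distributed with mean free path 1/MU.
first_interaction = rng.exponential(1.0 / MU, size=N)

# Photons whose first interaction lies beyond the phantom exit as primaries;
# the rest are scattered or absorbed inside the object.
primary = first_interaction > THICKNESS
interacted = 1.0 - primary.mean()
```

With these numbers only about exp(-6), roughly a quarter of a percent, of photons traverse the phantom uninteracted, so nearly all photons interact in the object, which is consistent with object scatter dominating under the object shadow.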
Stimulus factors in motion perception and spatial orientation
NASA Technical Reports Server (NTRS)
Post, R. B.; Johnson, C. A.
1984-01-01
The Malcolm horizon, a Peripheral Vision Horizon Device (PVHD), utilizes a large projected light stimulus as an attitude indicator in order to achieve a more compelling sense of roll than is obtained with smaller devices. The basic principle is that the larger stimulus is more similar to the view of a real horizon during roll, and does not require fixation and attention to the degree that smaller displays do. Successful implementation of such a device requires adjustment of the parameters of the visual stimulus so that its effects on motion perception and spatial orientation are optimized. With this purpose in mind, the effects of relevant image variables on the perception of object motion, self-motion, and spatial orientation are reviewed.
NASA Technical Reports Server (NTRS)
Larkin, J. E.; Matthews, K.; Lawrence, C. R.; Graham, J. R.; Harrison, W.; Jernigan, G.; Lin, S.; Nelson, J.; Neugebauer, G.; Smith, G.
1994-01-01
Images of the gravitational lens system MG 1131+0456 taken with the near-infrared camera on the W. M. Keck telescope in the J and K(sub s) bands show that the infrared counterparts of the compact radio structure are exceedingly red, with J - K greater than 4.2 mag. The J image reveals only the lensing galaxy, while the K(sub s) image shows both the lens and the infrared counterparts of the compact radio components. After subtracting the lensing galaxy from the K(sub s) image, the position and orientation of the compact components agree with their radio counterparts. The broad-band spectrum and observed brightness of the lens suggest a giant galaxy at a redshift of approximately 0.75, while the color of the quasar images suggests significant extinction by dust in the lens. There is a significant excess of faint objects within 20 arcsec of MG 1131+0456. Depending on their masses and redshifts, these objects could complicate the lensing potential considerably.
CRISPRED: CRISP imaging spectropolarimeter data reduction pipeline
NASA Astrophysics Data System (ADS)
de la Cruz Rodríguez, J.; Löfdahl, M. G.; Sütterlin, P.; Hillberg, T.; Rouppe van der Voort, L.
2017-08-01
CRISPRED reduces data from the CRISP imaging spectropolarimeter at the Swedish 1 m Solar Telescope (SST). It performs fitting routines, corrects optical aberrations from atmospheric turbulence as well as from the optics, and compensates for inter-camera misalignments, field-dependent and time-varying instrumental polarization, and spatial variation in the detector gain and in the zero level offset (bias). It has an object-oriented IDL structure with computationally demanding routines performed in C subprograms called as dynamically loadable modules (DLMs).
Knowledge-Based Vision Techniques for the Autonomous Land Vehicle Program
1991-10-01
Knowledge System: The CKS is an object-oriented knowledge database that was originally designed to serve as the central information manager for a... "Representation Space: An Approach to the Integration of Visual Information," Proc. of DARPA Image Understanding Workshop, Palo Alto, CA, pp. 263-272, May 1989... Strat, "Information Management in a Sensor-Based Autonomous System," Proc. DARPA Image Understanding Workshop, University of Southern CA, Vol. 1, pp
Image sequence analysis workstation for multipoint motion analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-08-01
This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing, and display techniques. In addition to automating and increasing the throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex object features under uncontrolled lighting and background conditions. Applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, and aircraft in flight. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1995-01-01
The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one video image of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a predetermined reference point, fixed by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head, with position and orientation sensors used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated to determine range to the target.
Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
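The frame-differencing and centroid step of the patent can be sketched as follows; the frame sizes, spot position, and threshold are invented for illustration:

```python
import numpy as np

def laser_spot_centroid(frame_ambient, frame_laser, threshold=10.0):
    """Subtract two near-simultaneous frames so only the laser spot
    survives, then return its intensity-weighted centroid (row, col)."""
    diff = frame_laser.astype(float) - frame_ambient.astype(float)
    diff[diff < threshold] = 0.0        # suppress common pixels and noise
    total = diff.sum()
    if total == 0.0:
        return None                     # no spot detected
    ys, xs = np.indices(diff.shape)
    return float((ys * diff).sum() / total), float((xs * diff).sum() / total)

# Invented frames: the laser adds a bright 2x2 spot at rows 10-11, cols 5-6.
ambient = np.full((32, 32), 20, dtype=np.uint8)
lit = ambient.copy()
lit[10:12, 5:7] += 100
cy, cx = laser_spot_centroid(ambient, lit)
```

Range then follows from the disparity between this centroid and the calibrated infinite-range reference point.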
Congruence analysis of point clouds from unstable stereo image sequences
NASA Astrophysics Data System (ADS)
Jepping, C.; Bethmann, F.; Luhmann, T.
2014-06-01
This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings, where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for detecting corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for reliable handling of occlusions and other disturbances that may occur. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
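A minimal sketch of such a RANSAC congruence analysis, using Umeyama's least-squares 3D similarity transform as the model. The paper's exact estimator, tolerance, and trial count are not given in the abstract, so the values below are invented:

```python
import numpy as np

def similarity_transform(A, B):
    """Least-squares 3D similarity (scale s, rotation R, translation t)
    mapping point set A onto B (Umeyama's method). A, B: (n, 3) arrays."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    Ac, Bc = A - ca, B - cb
    U, S, Vt = np.linalg.svd(Bc.T @ Ac / len(A))   # cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))             # enforce det(R) = +1
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / Ac.var(axis=0).sum()
    t = cb - s * R @ ca
    return s, R, t

def ransac_congruence(P0, P1, trials=200, tol=0.01, seed=1):
    """Find the largest point subset related by one similarity transform
    between epochs P0 and P1; its members mark the stable (congruent) area."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P0), dtype=bool)
    for _ in range(trials):
        idx = rng.choice(len(P0), size=3, replace=False)  # minimal sample
        s, R, t = similarity_transform(P0[idx], P1[idx])
        resid = np.linalg.norm(P1 - (s * (R @ P0.T).T + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Points outside the returned inlier mask are candidates for genuinely deformed surface regions, separating object deformation from camera movement.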
Open-source software platform for medical image segmentation applications
NASA Astrophysics Data System (ADS)
Namías, R.; D'Amato, J. P.; del Fresno, M.
2017-11-01
Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volume of medical imaging scans requires more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling different segmentation strategies simultaneously and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer and the processing core filters at the bottom layer. We apply the framework to different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast-prototyping open-source segmentation tool.
Vision requirements for Space Station applications
NASA Technical Reports Server (NTRS)
Crouse, K. R.
1985-01-01
Problems which will be encountered by computer vision systems in Space Station operations are discussed, along with solutions being examined at the Johnson Space Center. Lighting cannot be controlled in space, nor can the random presence of reflective surfaces. Task-oriented capabilities are to include docking to moving objects, identification of unexpected objects during autonomous flights to different orbits, and diagnosis of damage and repair requirements by autonomous Space Station inspection robots. The approaches being examined to provide these and other capabilities are television and IR sensors, advanced pattern recognition programs fed by data from laser probes, laser radar for robot eyesight, and arrays of SMART sensors for automated location and tracking of target objects. Attention is also being given to liquid crystal light valves for optical processing of images for comparison with onboard electronic libraries of images.
Automatic archaeological feature extraction from satellite VHR images
NASA Astrophysics Data System (ADS)
Jahjah, Munzer; Ulivieri, Carlo
2010-05-01
Archaeological applications need a methodological approach on a variable scale, able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) needs. The increased availability of high-resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High-resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in visual systems for recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms; it is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey-tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element either is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature-extraction techniques, eCognition and the ENVI SW module, were used in order to compare the results.
These techniques were applied to different archaeological sites in Turkmenistan (Nisa) and Iraq (Babylon); a further change detection analysis was applied to the Babylon site using two HR images acquired before and after the second Gulf War. The outputs differed because the operative scale of the sensed data determines the final result of the elaboration and the quality of the output information, and because each technique was sensitive to specific shapes in each input image. We mapped linear and nonlinear objects, updated archaeological cartography, and performed automatic change detection analysis for the Babylon site. The discussion of these techniques has the objective of providing the archaeological team with new instruments for the orientation and planning of remote sensing applications.
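The structuring-element probing described above can be illustrated in a few lines of NumPy: a morphological opening with a line-shaped element keeps only the structures the element fits into, which is how linear features (e.g. wall traces) can be separated from speckle. The toy image is invented:

```python
import numpy as np

def erode(img, selem):
    """Binary erosion: keep a pixel only if the structuring element,
    centred on it, fits entirely inside the foreground."""
    H, W = img.shape
    h, w = selem.shape
    oy, ox = h // 2, w // 2
    out = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            out[y, x] = all(
                0 <= y + i - oy < H and 0 <= x + j - ox < W
                and img[y + i - oy, x + j - ox]
                for i in range(h) for j in range(w) if selem[i, j])
    return out

def dilate(img, selem):
    """Binary dilation: a pixel becomes foreground if the element,
    centred on it, touches any foreground pixel."""
    H, W = img.shape
    h, w = selem.shape
    oy, ox = h // 2, w // 2
    out = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            out[y, x] = any(
                0 <= y + i - oy < H and 0 <= x + j - ox < W
                and img[y + i - oy, x + j - ox]
                for i in range(h) for j in range(w) if selem[i, j])
    return out

def opening(img, selem):
    """Erosion then dilation: removes structures the element does not
    fit into while preserving the rest."""
    return dilate(erode(img, selem), selem)

# Invented toy scene: a linear feature plus two isolated noise pixels.
img = np.zeros((20, 20), dtype=bool)
img[10, 2:15] = True                  # e.g. the trace of a buried wall
img[3, 3] = img[15, 17] = True        # speckle
selem = np.ones((1, 5), dtype=bool)   # horizontal line element
opened = opening(img, selem)          # keeps the wall, drops the speckle
```

Production tools implement the same operators far more efficiently; the brute-force loops here only make the fitting test explicit.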
Radiology image orientation processing for workstation display
NASA Astrophysics Data System (ADS)
Chang, Chung-Fu; Hu, Kermit; Wilson, Dennis L.
1998-06-01
Radiology images are acquired electronically using phosphor plates that are read in Computed Radiography (CR) readers. An automated radiology image orientation processor (RIOP) for determining the orientation of chest images and of abdomen images has been devised. In addition, chest images are differentiated as front (AP or PA) or side (lateral). Using the processing scheme outlined, hospitals will improve the efficiency of the quality assurance (QA) technicians who orient images and prepare them for presentation to the radiologists.
ERIC Educational Resources Information Center
Chapman, Bryan L.
1994-01-01
Discusses the effect of object-oriented programming on the evolution of authoring systems. Topics include the definition of an object; examples of object-oriented authoring interfaces; what object-orientation means to an instructional developer; how object orientation increases productivity and enhances interactivity; and the future of courseware…
NASA Astrophysics Data System (ADS)
Bethmann, F.; Jepping, C.; Luhmann, T.
2013-04-01
This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data, and their imaging properties. The procedure does not aim at photo-realistic images rendered under complex imaging and reflection models as used by common computer graphics programs. In contrast, the method is designed with the main emphasis on geometrically correct synthetic images without radiometric impact. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. The paper first describes the process of image simulation under consideration of colour value interpolation, MTF/PSF, and so on. Subsequently, the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as developed at IAPG for deformation measurement in car safety testing.
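The photogrammetric distortion step mentioned above is commonly modelled with Brown's radial and decentring terms. A sketch of applying such a model to ideal image coordinates follows; the coefficient names use the usual k1/k2/p1/p2 convention, which is not necessarily the paper's exact parameterization:

```python
def apply_brown_distortion(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Map ideal image coordinates (relative to the principal point) to
    distorted ones using Brown's radial (k1, k2) and decentring (p1, p2)
    terms, the common photogrammetric distortion model."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

In a simulator of this kind, each projected object point is passed through such a function before rasterization, so the synthetic images obey the same camera model the photogrammetric software later estimates.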
Detection and 3d Modelling of Vehicles from Terrestrial Stereo Image Pairs
NASA Astrophysics Data System (ADS)
Coenen, M.; Rottensteiner, F.; Heipke, C.
2017-05-01
The detection and pose estimation of vehicles plays an important role for automated and autonomous moving objects, e.g. in autonomous driving environments. We tackle that problem on the basis of street-level stereo images obtained from a moving vehicle. Processing every stereo pair individually, our approach is divided into two subsequent steps: vehicle detection and modelling. For detection, we make use of the 3D stereo information and incorporate geometric assumptions on vehicle-inherent properties in a generic 3D object detection applied first. By combining this generic detection approach with a state-of-the-art vehicle detector, we achieve satisfying detection results, with values for completeness and correctness of more than 86%. By fitting an object-specific vehicle model to the vehicle detections, we are able to reconstruct the vehicles in 3D and to derive pose estimates as well as shape parameters for each vehicle. To deal with the intra-class variability of vehicles, our model-fitting approach uses a deformable 3D active shape model learned from 3D CAD vehicle data. While we achieve encouraging values of up to 67.2% for correct position estimates, we face larger problems concerning orientation estimation. The evaluation is done using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012).
Research on Remote Sensing Geological Information Extraction Based on Object Oriented Classification
NASA Astrophysics Data System (ADS)
Gao, Hui
2018-04-01
Northern Tibet belongs to the sub-cold arid climate zone of the plateau. It is rarely visited by people, and geological working conditions are very poor; however, the stratum exposures are good and human interference is very small. Therefore, research on the automatic classification and extraction of remote sensing geological information has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations, and topological relations of various kinds of geological information are mined. By setting thresholds within a hierarchical classification, eight kinds of geological information were classified and extracted. Accuracy analysis against existing geological maps shows that the overall accuracy reached 87.8561%, indicating that the object-oriented classification method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.
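The threshold-based hierarchical classification can be sketched as a rule cascade. The features, class names, and thresholds below are invented placeholders, not the rules used in the study:

```python
import numpy as np

def hierarchical_classify(ndvi, brightness, slope):
    """Toy hierarchical threshold classifier: each pixel/object descends
    a rule cascade until a class is assigned. All features, classes, and
    thresholds here are invented for illustration."""
    cls = np.zeros(ndvi.shape, dtype=np.uint8)        # 0 = unclassified
    veg = ndvi > 0.3
    cls[veg] = 1                                      # vegetation cover
    bright_rock = (~veg) & (brightness > 0.6)
    cls[bright_rock & (slope > 20)] = 2               # exposed bedrock
    cls[bright_rock & (slope <= 20)] = 3              # alluvial deposits
    cls[(~veg) & (brightness <= 0.6)] = 4             # dark lithology
    return cls

# Invented 2x2 "object" features.
ndvi = np.array([[0.5, 0.1], [0.1, 0.1]])
brightness = np.array([[0.9, 0.9], [0.9, 0.2]])
slope = np.array([[30.0, 30.0], [5.0, 30.0]])
classes = hierarchical_classify(ndvi, brightness, slope)
```

In object-oriented software such as eCognition, the same cascade is expressed as a rule set over segment attributes rather than raw pixels; the control flow is identical.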
NASA Astrophysics Data System (ADS)
Belkacemi, Mohamed; Stolz, Christophe; Mathieu, Alexandre; Lemaitre, Guillaume; Massich, Joan; Aubreton, Olivier
2015-11-01
Today, industries ensure the quality of their manufactured products through computer vision techniques and nonconventional imaging. Three-dimensional (3-D) scanners and nondestructive testing (NDT) systems are commonly used independently for such applications; combined, they constitute hybrid systems providing both 3-D reconstruction and NDT analysis. These systems, however, suffer from drawbacks such as errors during data fusion and higher cost for manufacturers. In an attempt to solve these problems, a single active thermography system based on scanning-from-heating is proposed in this paper. In addition to 3-D digitization of the object, our contributions are twofold: (1) nonthrough defect detection for a homogeneous metallic object and (2) fiber orientation assessment for a long-fiber composite material. Experiments on steel and aluminum plates show that our method achieves the detection of nonthrough defects. Additionally, the estimation of fiber orientation is evaluated on a carbon-fiber composite material.
NASA Astrophysics Data System (ADS)
Boichenko, Stepan
2018-04-01
We theoretically study laser-scanning confocal fluorescence microscopy using elliptically polarized cylindrical vector excitation light as a tool for visualization of arbitrarily oriented single quantum dipole emitters located (1) near planar surfaces enhancing fluorescence, (2) in a thin supported polymer film, (3) in a freestanding polymer film, and (4) in a dielectric planar microcavity. It is shown analytically that by using a tightly focused azimuthally polarized beam, it is possible to completely exclude the orientational dependence of the image intensity maximum of a quantum emitter that absorbs light as a pair of incoherent independent linear dipoles. For linear dipole quantum emitters, an orientational independence degree higher than 0.9 can normally be achieved (a value of 1 corresponds to completely excluded orientational dependence), provided that the collection efficiency of the microscope objective and the emitter's total quantum yield are not strongly orientation-dependent. Thus, the visualization of arbitrarily oriented single quantum emitters by means of the studied technique can be performed quite efficiently.
Mental visualization of objects from cross-sectional images
Wu, Bing; Klatzky, Roberta L.; Stetten, George D.
2011-01-01
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model: localizing cross-sections within a common frame of reference, and spatiotemporally integrating cross-sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross-sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386
Brain imaging registry for neurologic diagnosis and research
NASA Astrophysics Data System (ADS)
Hoo, Kent S., Jr.; Wong, Stephen T. C.; Knowlton, Robert C.; Young, Geoffrey S.; Walker, John; Cao, Xinhua; Dillon, William P.; Hawkins, Randall A.; Laxer, Kenneth D.
2002-05-01
The purpose of this paper is to demonstrate the importance of building a brain imaging registry (BIR) on top of existing medical information systems including Picture Archiving Communication Systems (PACS) environment. We describe the design framework for a cluster of data marts whose purpose is to provide clinicians and researchers efficient access to a large volume of raw and processed patient images and associated data originating from multiple operational systems over time and spread out across different hospital departments and laboratories. The framework is designed using object-oriented analysis and design methodology. The BIR data marts each contain complete image and textual data relating to patients with a particular disease.
Extracting built-up areas from TerraSAR-X data using object-oriented classification method
NASA Astrophysics Data System (ADS)
Wang, SuYun; Sun, Z. C.
2017-02-01
Based on single-polarized TerraSAR-X data, the approach generates homogeneous segments on an arbitrary number of scale levels by applying a region-growing algorithm that takes the intensity of backscatter and shape-related properties into account. The object-oriented procedure consists of three main steps: first, analysis of the local speckle behavior in the SAR intensity data, leading to the generation of a texture image; second, a segmentation based on the intensity image; third, classification of each segment using the derived texture file and intensity information in order to identify and extract built-up areas (BAs). In our research, the distribution of BAs in Dongying City is derived from a single-polarized TSX SM image (acquired on 17 June 2013) with an average ground resolution of 3 m using the proposed approach. By cross-validating randomly selected validation points against geo-referenced field sites and QuickBird high-resolution imagery, confusion matrices with statistical indicators were calculated and used to assess the classification results. The results demonstrate that an overall accuracy of 92.89% and a kappa coefficient of 0.85 could be achieved. We have shown that connecting texture information with the analysis of local speckle divergence, and combining texture and intensity for built-up area extraction, is feasible, efficient, and rapid.
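The first step, turning local speckle behavior into a texture layer, can be sketched with a local coefficient-of-variation filter; the paper's exact texture measure is not specified in the abstract, so this is an illustrative stand-in:

```python
import numpy as np

def local_speckle_texture(intensity, win=7):
    """Local coefficient of variation (std/mean) of SAR intensity.
    Over homogeneous terrain with fully developed speckle it is roughly
    constant, while heterogeneous built-up areas push it higher, so the
    resulting map can serve as a texture layer for segmentation."""
    pad = win // 2
    I = np.pad(intensity.astype(float), pad, mode='reflect')
    H, W = intensity.shape
    tex = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            block = I[y:y + win, x:x + win]
            m = block.mean()
            tex[y, x] = block.std() / m if m > 0 else 0.0
    return tex
```

The texture map is then stacked with the intensity image as an extra band for the region-growing segmentation and per-segment classification.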
An adaptive, object oriented strategy for base calling in DNA sequence analysis.
Giddings, M C; Brumley, R L; Haker, M; Smith, L M
1993-01-01
An algorithm has been developed for the determination of nucleotide sequence from data produced by fluorescence-based automated DNA sequencing instruments employing the four-color strategy. This algorithm takes advantage of object-oriented programming techniques for modularity and extensibility. The algorithm is adaptive in that data sets from a wide variety of instruments and sequencing conditions can be used with good results. Confidence values are provided on the base calls as an estimate of accuracy. The algorithm iteratively employs confidence determinations from several different modules, each of which examines a different feature of the data for accurate peak identification. Modules within this system can be added or removed for increased performance or for application to a different task. In comparisons with commercial software, the algorithm performed well. PMID:8233787
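The combination of per-module confidence values can be sketched with a naive independence assumption; the published algorithm's actual combination rule may well differ, so this is only an illustration of the modular idea:

```python
def combine_confidences(module_scores):
    """Combine per-base confidence estimates from independent analysis
    modules by multiplying and renormalising (naive independence
    assumption). module_scores: list of dicts, base -> confidence."""
    bases = "ACGT"
    combined = {b: 1.0 for b in bases}
    for scores in module_scores:
        for b in bases:
            combined[b] *= scores.get(b, 0.25)  # uninformative default
    total = sum(combined.values()) or 1.0
    combined = {b: v / total for b, v in combined.items()}
    call = max(combined, key=combined.get)      # highest-confidence base
    return call, combined[call]
```

Because each module contributes a dict of scores, modules can be added or removed without touching the combiner, mirroring the extensibility the abstract attributes to the object-oriented design.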
Automated quantification of neurite outgrowth orientation distributions on patterned surfaces
NASA Astrophysics Data System (ADS)
Payne, Matthew; Wang, Dadong; Sinclair, Catriona M.; Kapsa, Robert M. I.; Quigley, Anita F.; Wallace, Gordon G.; Razal, Joselito M.; Baughman, Ray H.; Münch, Gerald; Vallotton, Pascal
2014-08-01
Objective. We have developed an image analysis methodology for quantifying the anisotropy of neuronal projections on patterned substrates. Approach. Our method is based on the fitting of smoothing splines to the digital traces produced using a non-maximum suppression technique. This enables precise estimates of the local tangents uniformly along the neurite length, and leads to unbiased orientation distributions suitable for objectively assessing the anisotropy induced by tailored surfaces. Main results. In our application, we demonstrate that carbon nanotubes arrayed in parallel bundles over gold surfaces induce a considerable neurite anisotropy; a result which is relevant for regenerative medicine. Significance. Our pipeline is generally applicable to the study of fibrous materials on 2D surfaces and should also find applications in the study of DNA, microtubules, and other polymeric materials.
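The paper fits smoothing splines to the digital traces before taking tangents; as a simpler sketch of the same orientation-distribution idea, local tangent angles along a trace can be estimated with finite differences (the spline smoothing step is omitted here):

```python
import numpy as np

def tangent_angles(trace):
    """Local tangent orientation (degrees, folded into [0, 180)) along a
    neurite trace given as an (N, 2) array of x, y points."""
    trace = np.asarray(trace, dtype=float)
    dx = np.gradient(trace[:, 0])
    dy = np.gradient(trace[:, 1])
    return np.degrees(np.arctan2(dy, dx)) % 180.0

# Hypothetical trace running at 45 degrees
t = np.linspace(0.0, 10.0, 50)
trace = np.column_stack([t, t])
angles = tangent_angles(trace)
print(np.allclose(angles, 45.0))  # True
```

An anisotropy measure could then be read off a histogram of `angles`; for noisy real traces the spline smoothing the authors use would be essential.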
Mladinich, C.
2010-01-01
Human disturbance is a leading ecosystem stressor. Human-induced modifications include transportation networks, areal disturbances due to resource extraction, and recreation activities. High-resolution imagery and object-oriented classification rather than pixel-based techniques have successfully identified roads, buildings, and other anthropogenic features. Three commercial, automated feature-extraction software packages (Visual Learning Systems' Feature Analyst, ENVI Feature Extraction, and Definiens Developer) were evaluated by comparing their ability to effectively detect the disturbed surface patterns from motorized vehicle traffic. Each package achieved overall accuracies in the 70% range, demonstrating the potential to map the surface patterns. The Definiens classification was more consistent and statistically valid. Copyright ?? 2010 by Bellwether Publishing, Ltd. All rights reserved.
European standardization effort: interworking the goal
NASA Astrophysics Data System (ADS)
Mattheus, Rudy A.
1993-09-01
In the European Standardization Committee (CEN), the technical committee responsible for standardization activities in Medical Informatics (CEN TC 251) has agreed upon the directions of the scopes to follow in this field. They are described in the Directory of the European Standardization Requirements for Healthcare Informatics and Programme for the Development of Standards, adopted on 02-28-1991 by CEN/TC 251 and approved by CEN/BT. Top-down objectives describe the common framework and items like terminology and security, while more bottom-up oriented items cover fields like medical imaging and multi-media. The draft standard is described: the general framework model and object-oriented model, the interworking aspects, the relation to ISO standards, and the DICOM proposal. This paper also focuses on the boundaries in the standardization work, which also influence the standardization process.
Self-amplified optical pattern recognition system
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1994-01-01
A self-amplifying optical pattern recognizer includes a geometric system configuration similar to that of a Vander Lugt holographic matched filter configuration with a photorefractive crystal specifically oriented with respect to the input beams. An extraordinarily polarized, spherically converging object image beam is formed by laser illumination of an input object image and applied through a photorefractive crystal, such as a barium titanate (BaTiO3) crystal. A volume or thin-film dif... ORIGIN OF THE INVENTION: The invention described herein was made in the performance of work under a NASA contract, and is subject to the provisions of Public Law 96-517 (35 USC 202) in which the Contractor has elected to retain title.
Effects of Implied Motion and Facing Direction on Positional Preferences in Single-Object Pictures.
Palmer, Stephen E; Langlois, Thomas A
2017-07-01
Palmer, Gardner, and Wickens studied aesthetic preferences for pictures of single objects and found a strong inward bias: Right-facing objects were preferred left-of-center and left-facing objects right-of-center. They found no effect of object motion (people and cars showed the same inward bias as chairs and teapots), but the objects were not depicted as moving. Here we measured analogous inward biases with objects depicted as moving with an implied direction and speed by having participants drag-and-drop target objects into the most aesthetically pleasing position. In Experiment 1, human figures were shown diving or falling while moving forward or backward. Aesthetic biases were evident for both inward-facing and inward-moving figures, but the motion-based bias dominated so strongly that backward divers or fallers were preferred moving inward but facing outward. Experiment 2 investigated implied speed effects using images of humans, horses, and cars moving at different speeds (e.g., standing, walking, trotting, and galloping horses). Inward motion or facing biases were again present, and differences in their magnitude due to speed were evident. Unexpectedly, faster moving objects were generally preferred closer to frame center than slower moving objects. These results are discussed in terms of the combined effects of prospective, future-oriented biases, and retrospective, past-oriented biases.
Purpura, Keith P.; Victor, Jonathan D.
2014-01-01
Segmenting the visual image into objects is a crucial stage of visual processing. Object boundaries are typically associated with differences in luminance, but discontinuities in texture also play an important role. We showed previously that a subpopulation of neurons in V2 in anesthetized macaques responds to orientation discontinuities parallel to their receptive field orientation. Such single-cell responses could be a neurophysiological correlate of texture boundary detection. Neurons in V1, on the other hand, are known to have contextual response modulations such as iso-orientation surround suppression, which also produce responses to orientation discontinuities. Here, we use pseudorandom multiregion grating stimuli of two frame durations (20 and 40 ms) to probe and compare texture boundary responses in V1 and V2 in anesthetized macaque monkeys. In V1, responses to texture boundaries were observed for only the 40 ms frame duration and were independent of the orientation of the texture boundary. However, in transient V2 neurons, responses to such texture boundaries were robust for both frame durations and were stronger for boundaries parallel to the neuron's preferred orientation. The dependence of these processes on stimulus duration and orientation indicates that responses to texture boundaries in V2 arise independently of contextual modulations in V1. In addition, because the responses in transient V2 neurons are sensitive to the orientation of the texture boundary but those of V1 neurons are not, we suggest that V2 responses are the correlate of texture boundary detection, whereas contextual modulation in V1 serves other purposes, possibly related to orientation “pop-out.” PMID:24599456
Evaluation of sequential images for photogrammetric point determination
NASA Astrophysics Data System (ADS)
Kowalczyk, M.
2011-12-01
Close range photogrammetry encounters many problems in reconstructing the three-dimensional shape of objects. The relative orientation parameters of the photographs usually play the key role in solving this problem. Automating the process is difficult due to the complexity of the recorded scene and the configuration of camera positions, which usually makes it impossible to join the photos into one set automatically. Applying a camcorder is a solution widely proposed in the literature to support the creation of 3D models. The main advantages of this tool are the large number of recorded images and camera positions: the exterior orientation changes only slightly between two neighboring frames. These features of a film sequence make it possible to create models with basic algorithms that work faster and more robustly than with separately taken photos. The first part of this paper presents results of experiments determining the interior orientation parameters of several sets of frames depicting a three-dimensional test field. This section describes the calibration repeatability of film frames taken from a camcorder, which is important for the stability of the camera's interior geometric parameters. A parametric model of systematic errors was applied to correct the images. Afterwards, a short film of the same test field was taken to determine a group of check points; this was done to verify the camera's applicability to measurement tasks. Finally, we present results of experiments comparing the determination of recorded object points in 3D space. In conventional digital photogrammetry, where separate photos are used, the first levels of image pyramids are connected using feature-based matching. This complicated process creates many contingencies that can produce false detections of image similarities.
In the case of a digital film camera, authors avoid this risky step and go straight to area-based matching, exploiting the high degree of similarity between two corresponding film frames. A first approximation for establishing connections between photos comes from a whole-image distance. This image-distance method can work with more than just the two dimensions of a translation vector: scale and angles are also used to improve image matching. This operation creates more similar-looking frames in which corresponding characteristic points lie close to each other, so the procedure searching for pairs of points works faster and more accurately, because the analyzed areas can be reduced. Another proposed solution, based on an image created by adding the differences between particular frames, gives rougher results but works much faster than standard matching.
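The whole-image distance idea described above can be sketched as an exhaustive search over integer translations minimizing a mean absolute intensity difference. The search range and test frames below are invented, and the scale and angle dimensions mentioned in the text are omitted for brevity:

```python
import numpy as np

def best_translation(a, b, max_shift=3):
    """Find the integer (dy, dx) shift of frame b that minimizes the mean
    absolute intensity difference to frame a (with wrap-around borders)."""
    best, best_d = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            bs = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            d = np.mean(np.abs(a.astype(float) - bs.astype(float)))
            if d < best_d:
                best_d, best = d, (dy, dx)
    return best

# Synthetic neighboring frames: b is a copy of a shifted by (2, 1)
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(np.roll(a, 2, axis=0), 1, axis=1)
print(best_translation(a, b))  # (-2, -1)
```

Aligning on this coarse offset first shrinks the search windows for the subsequent point-pair matching, which is the speed-up the abstract describes.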
Fusion of Geophysical Images in the Study of Archaeological Sites
NASA Astrophysics Data System (ADS)
Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.
2011-12-01
This paper presents results from different fusion techniques between geophysical images from different modalities, in order to combine them into one image with higher information content than either of the two original images independently. The resultant image is useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated in the Kampana site (NE Greece) near the ancient theater of Maronia city. Archaeological excavations there revealed an ancient theater, an aristocratic house and the temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicated the probability of a buried urban structure. In order to accurately locate and map the latter, geophysical measurements were performed with the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We performed a semi-stochastic pixel-based registration method between the geophysical images in order to fine-register them, correcting the local spatial offsets produced by the use of hand-held devices. After this procedure we applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We have used three different fusion techniques: fusion with mean values, with wavelets (enhancing selected frequency bands), and with curvelets (giving emphasis to specific bands and angles, according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than each of the original geophysical images separately.
The comparison of the results of the three approaches showed that fusion with curvelets, giving emphasis to the features' orientation, seems to give the best fused image. The resultant image shows clear linear and ellipsoidal features corresponding to potential archaeological relics.
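Of the three fusion techniques, fusion with mean values is the simplest and can be sketched directly. Normalizing each modality first is an assumption added here (magnetic gradients and apparent resistivities live on very different scales), not a step the abstract specifies:

```python
import numpy as np

def fuse_mean(img1, img2):
    """Pixel-wise mean fusion of two co-registered geophysical images.
    Each image is first normalized to zero mean, unit variance so that
    neither modality dominates the average."""
    def norm(im):
        im = im.astype(float)
        return (im - im.mean()) / im.std()
    return 0.5 * (norm(img1) + norm(img2))

# Invented stand-ins for magnetic-gradient and apparent-resistivity grids
rng = np.random.default_rng(1)
magnetic = rng.random((8, 8))
resistivity = rng.random((8, 8))
fused = fuse_mean(magnetic, resistivity)
print(fused.shape)  # (8, 8)
```

The wavelet and curvelet variants replace the plain average with band- and orientation-selective weighting before the inverse transform.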
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available in the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
The Effect of Experimental Variables on Industrial X-Ray Micro-Computed Tomography Sensitivity
NASA Technical Reports Server (NTRS)
Roth, Don J.; Rauser, Richard W.
2014-01-01
A study was performed on the effect of experimental variables on radiographic sensitivity (image quality) in x-ray micro-computed tomography images for a high density thin wall metallic cylinder containing micro-EDM holes. Image quality was evaluated in terms of signal-to-noise ratio, flaw detectability, and feature sharpness. The variables included: day-to-day reproducibility, current, integration time, voltage, filtering, number of frame averages, number of projection views, beam width, effective object radius, binning, orientation of sample, acquisition angle range (180deg to 360deg), and directional versus transmission tube.
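One of the image-quality measures evaluated above, signal-to-noise ratio, can be sketched as the mean intensity in a signal region divided by the standard deviation in a background region; the region masks and intensity values below are invented:

```python
import numpy as np

def snr(image, signal_mask, background_mask):
    """Signal-to-noise ratio: mean intensity in a signal region divided
    by the standard deviation of intensities in a background region."""
    image = np.asarray(image, dtype=float)
    return image[signal_mask].mean() / image[background_mask].std()

# Invented 2x4 test image: left half signal, right half background
img = np.array([[100.0, 100.0, 9.0, 11.0],
                [100.0, 100.0, 9.0, 11.0]])
sig = np.zeros_like(img, dtype=bool)
sig[:, :2] = True
bkg = ~sig
print(snr(img, sig, bkg))  # 100.0
```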
Target-locking acquisition with real-time confocal (TARC) microscopy.
Lu, Peter J; Sims, Peter A; Oki, Hidekazu; Macarthur, James B; Weitz, David A
2007-07-09
We present a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Real-time Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. We demonstrate the system's capabilities by target-locking freely diffusing clusters of attractive colloidal particles, and actively transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions, for several hours. During this time, both the colloidal clusters and live cells move distances several times the length of the imaging volume.
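The target-locking loop can be sketched in two steps per 3D stack: locate the feature (here reduced to a simple intensity-weighted centroid, a placeholder for the paper's full structural analysis) and compute the stage move that re-centers it:

```python
import numpy as np

def intensity_centroid(stack):
    """Intensity-weighted centroid (z, y, x) of a 3D image stack."""
    stack = np.asarray(stack, dtype=float)
    grids = np.indices(stack.shape)
    return tuple(float((g * stack).sum() / stack.sum()) for g in grids)

def stage_offset(stack):
    """One iteration of the target-locking loop: the move that brings the
    centroid back to the geometric center of the imaging volume."""
    center = tuple((s - 1) / 2.0 for s in stack.shape)
    return tuple(c - m for c, m in zip(center, intensity_centroid(stack)))

# Invented stack: a single bright voxel off-center at (z, y, x) = (1, 2, 3)
stack = np.zeros((5, 5, 5))
stack[1, 2, 3] = 1.0
print(stage_offset(stack))  # (1.0, 0.0, -1.0)
```

In the real instrument this offset would be sent to the sample stage between acquisitions, keeping the moving object centered.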
Extraction of Extended Small-Scale Objects in Digital Images
NASA Astrophysics Data System (ADS)
Volkov, V. Y.
2015-05-01
The problem of detecting and localizing extended small-scale objects of different shapes arises in remote observation systems using SAR, infrared, lidar and television cameras. Intense non-stationary background is the main difficulty for processing. Another challenge is the low quality of the images: blobs and blurred boundaries; in addition, SAR images suffer from serious intrinsic speckle noise. The background statistics are not normal, with evident skewness and heavy tails in the probability density, so the background is hard to identify. The problem of extracting small-scale objects is solved here on the basis of directional filtering, adaptive thresholding and morphological analysis. A new kind of mask is used, open-ended at one side, which makes it possible to extract the ends of line segments of unknown length. An advanced method of dynamic adaptive threshold setting is investigated, based on extracting isolated fragments after thresholding. A hierarchy of isolated fragments in the binary image is proposed for analyzing segmentation results; it includes small-scale objects of different shape, size and orientation. The method extracts isolated fragments in the binary image and counts the points in these fragments. The number of points in the extracted fragments, normalized to the total number of points for a given threshold, is used as the extraction effectiveness for these fragments. The new method for adaptive threshold setting and control maximizes extraction effectiveness. It has optimality properties for object extraction in a normal noise field and shows effective results for real SAR images.
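A hedged sketch of the adaptive threshold rule described above: extract isolated fragments (connected components) of the binary image, define effectiveness as the fraction of above-threshold points falling in fragments of at least a minimum size (the size criterion is an assumption standing in for the paper's fragment hierarchy), and pick the threshold maximizing it:

```python
import numpy as np
from collections import deque

def components(binary):
    """4-connected components of a 2D boolean array (BFS labeling)."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    comps = []
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                q, comp = deque([(i, j)]), []
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] \
                                and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def best_threshold(image, thresholds, min_size=3):
    """Pick the threshold maximizing extraction effectiveness: the share
    of above-threshold pixels lying in fragments of at least min_size."""
    best_t, best_e = None, -1.0
    for t in thresholds:
        binary = image > t
        total = int(binary.sum())
        if total == 0:
            continue
        kept = sum(len(c) for c in components(binary) if len(c) >= min_size)
        if kept / total > best_e:
            best_t, best_e = t, kept / total
    return best_t, best_e

# Invented image: a bright 6-pixel line segment plus three noise pixels
img = np.zeros((10, 10))
img[5, 2:8] = 5.0
img[0, 0] = img[2, 9] = img[9, 0] = 2.0
t_best, e_best = best_threshold(img, [1.0, 3.0])
print(t_best, e_best)  # 3.0 1.0
```

The higher threshold wins because it keeps the line segment while dropping the isolated single-pixel fragments.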
Visualization and manipulating the image of a formal data structure (FDS)-based database
NASA Astrophysics Data System (ADS)
Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien
1994-08-01
A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure (FDS) for 3D single-valued vector maps. This FDS is implemented in an Oracle database. In this study we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. An objective of this study is that an end-user can alter and add terrain objects in the image. The drawing application creates an export file that is compared with the import file. Differences between these files result in updating the database, which involves consistency checks. In this study AutoCAD is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and AutoCAD. The data structure of the FDS is compared to the data structure of AutoCAD, and the FDS data are converted into the AutoCAD structure in a form equivalent to the FDS.
Determining the orientation of depth-rotated familiar objects.
Niimi, Ryosuke; Yokosawa, Kazuhiko
2008-02-01
How does the human visual system determine the depth-orientation of familiar objects? We examined reaction times and errors in the detection of 15 degree differences in the depth orientations of two simultaneously presented familiar objects, which were the same objects (Experiment 1) or different objects (Experiment 2). Detection of orientation differences was best for 0 degrees (front) and 180 degrees (back), while 45 degrees and 135 degrees yielded poorer results, and 90 degrees (side) showed intermediate results, suggesting that the visual system is tuned for front, side and back orientations. We further found that these advantages are due to orientation-specific features such as horizontal linear contours and symmetry, since the 90 degree advantage was absent for objects with curvilinear contours, and asymmetric objects diminished the 0 degree and 180 degree advantages. We conclude that the efficiency of visually determining object orientation is highly orientation-dependent, and that object orientation may be perceived in favor of front-back axes.
Experimental Influences in the Accurate Measurement of Cartilage Thickness in MRI.
Wang, Nian; Badar, Farid; Xia, Yang
2018-01-01
Objective: To study the experimental influences on the measurement of cartilage thickness by magnetic resonance imaging (MRI). Design: The complete thicknesses of healthy and trypsin-degraded cartilage were measured at high resolution under different MRI conditions, using two intensity-based imaging sequences (ultra-short echo [UTE] and multislice-multiecho [MSME]) and three quantitative relaxation imaging sequences (T1, T2, and T1ρ). Other variables included different orientations in the magnet, two soaking solutions (saline and phosphate buffered saline [PBS]), and external loading. Results: With cartilage soaked in saline, the UTE and T1 methods yielded complete and consistent measurements of cartilage thickness, while the thickness measurements by the T2, T1ρ, and MSME methods were orientation dependent. The effect of external loading on cartilage thickness is also sequence and orientation dependent. All variations in cartilage thickness in MRI could be eliminated with the use of a 100 mM PBS or by imaging with the UTE sequence. Conclusions: The appearance of articular cartilage and the measurement accuracy of cartilage thickness in MRI can be influenced by a number of experimental factors in ex vivo MRI, from the use of various pulse sequences and soaking solutions to the health of the tissue. T2-based imaging sequences, both proton-intensity and quantitative relaxation, produced the largest variations. With adequate resolution, the accurate measurement of whole cartilage tissue in clinical MRI could be utilized to detect differences between healthy and osteoarthritic cartilage after compression.
Near-real-time biplanar fluoroscopic tracking system for the video tumor fighter
NASA Astrophysics Data System (ADS)
Lawson, Michael A.; Wika, Kevin G.; Gilles, George T.; Ritter, Rogers C.
1991-06-01
We have developed software capable of the three-dimensional tracking of objects in the brain volume, and the subsequent overlaying of an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the 'Video Tumor Fighter' (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally-oriented, visible- light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed and plans for incorporating bi-planar, x-ray fluoroscopy are presented.
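The coordinate-transformation step above, mapping a tracked 3D position into MR coordinates and picking the intersecting slice, can be sketched as a rigid transform. The calibration rotation, translation, and slice geometry below are invented placeholders, not the system's actual calibration:

```python
import numpy as np

def to_mr_coords(p_cam, R, t):
    """Map a 3D point from camera/tracker coordinates into MR image
    coordinates via a rigid transform: p_mr = R @ p_cam + t."""
    return np.asarray(R, float) @ np.asarray(p_cam, float) + np.asarray(t, float)

def mr_slice_index(z_mr, z_first=0.0, thickness=1.5):
    """Index of the MR slice containing coordinate z_mr
    (the slice geometry here is an invented example)."""
    return int(round((z_mr - z_first) / thickness))

# Hypothetical calibration: 90-degree rotation about z plus a translation
R = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
t = [10.0, 0.0, 0.0]
p_mr = to_mr_coords([1.0, 0.0, 0.0], R, t)
print(p_mr.tolist())        # [10.0, 1.0, 0.0]
print(mr_slice_index(3.0))  # 2
```

The overlay step then draws the object marker into the slice returned by `mr_slice_index`.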
Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz
2009-09-01
The widely used procedure of evaluating cup orientation following total hip arthroplasty from a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability of individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their extensive usage in clinical routine is still limited. This may be explained by their requirement for a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer due to proprietary issues, and by their requirement of either multiple radiographs or a radiograph-specific calibration, neither of which is available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch", in which a hybrid 2D-3D registration scheme combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform toolkits Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform.
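HipMatch's hybrid scheme combines landmark-to-ray and intensity-based registration, neither of which is reproduced here. As a simpler, related sketch, the least-squares rigid transform between matched 3D landmark sets (the Kabsch/Umeyama construction, with invented landmarks) can be computed as:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ P @ R.T + t
    (Kabsch/Umeyama), a building block of landmark registration."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation only
    return R, cq - R @ cp

# Invented landmarks: rotate 30 degrees about z and translate
th = np.pi / 6
Rz = np.array([[np.cos(th), -np.sin(th), 0],
               [np.sin(th),  np.cos(th), 0],
               [0,           0,          1]])
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
R_est, t_est = rigid_fit(P, Q)
print(np.allclose(R_est, Rz), np.allclose(t_est, [1.0, 2.0, 3.0]))  # True True
```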
Tensor scale-based fuzzy connectedness image segmentation
NASA Astrophysics Data System (ADS)
Saha, Punam K.; Udupa, Jayaram K.
2003-05-01
Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging-togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - into the affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of that approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows an effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer-generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.
Intraoperative virtual brain counseling
NASA Astrophysics Data System (ADS)
Jiang, Zhaowei; Grosky, William I.; Zamorano, Lucia J.; Muzik, Otto; Diaz, Fernando
1997-06-01
Our objective is to offer online real-time intelligent guidance to the neurosurgeon. Different from traditional image-guidance technologies that offer intra-operative visualization of medical images or atlas images, virtual brain counseling goes one step further: it can distinguish related brain structures and provide information about them intra-operatively. Virtual brain counseling is the foundation for surgical planning optimization and on-line surgical reference. It can provide a warning system that alerts the neurosurgeon if the chosen trajectory will pass through eloquent brain areas. In order to fulfill this objective, tracking techniques are used intra-operatively. Most importantly, a 3D virtual brain environment, different from traditional 3D digitized atlases, is an object-oriented model of the brain that stores information about different brain structures together with their related information. An object-oriented hierarchical hyper-voxel space (HHVS) is introduced to integrate anatomical and functional structures. Spatial queries based on a position of interest, line segment of interest, and volume of interest are introduced in this paper. The virtual brain environment is integrated with existing surgical pre-planning and intra-operative tracking systems to provide information for planning optimization and on-line surgical guidance. The neurosurgeon is alerted automatically if the planned treatment affects any critical structures. Architectures such as HHVS, and algorithms such as spatial querying, normalizing, and warping, are presented in the paper. A prototype has shown that the virtual brain is intuitive in its hierarchical 3D appearance. It also showed that HHVS, as the key structure for virtual brain counseling, efficiently integrates multi-scale brain structures based on their spatial relationships. This is a promising development for optimization of treatment plans and online intelligent surgical guidance.
Linear feature detection algorithm for astronomical surveys - I. Algorithm description
NASA Astrophysics Data System (ADS)
Bektešević, Dino; Vinković, Dejan
2017-11-01
Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
NASA Technical Reports Server (NTRS)
Alvertos, Nicolas; Dcunha, Ivan
1992-01-01
A feature set of two-dimensional curves is obtained by intersecting symmetric objects such as spheres, cones, cylinders, ellipsoids, paraboloids, and parallelepipeds with two planes. After determining the location and orientation of the objects in space, these objects are aligned so as to lie on a plane parallel to a suitable coordinate system. These objects are then intersected with a horizontal and a vertical plane. Experiments were carried out with range images of a sphere and a cylinder. The 3-D discriminant approach was used to recognize quadric surfaces made up of simulated data. Its application to real data was also studied.
NASA Astrophysics Data System (ADS)
Markelin, L.; Honkavaara, E.; Näsi, R.; Nurminen, K.; Hakala, T.
2014-08-01
Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. UAVs enable accurate, flexible, low-cost and multiangular measurements of 3D geometric, radiometric, and temporal properties of land and vegetation using various sensors. In this paper we present a geometric processing chain for a multiangular measurement system that is designed for measuring object directional reflectance characteristics in a wavelength range of 400-900 nm. The technique is based on a novel, lightweight spectral camera designed for UAV use. The multiangular measurement is conducted by collecting vertical and oblique area-format spectral images. End products of the geometric processing are image exterior orientations, 3D point clouds and digital surface models (DSMs). These data are needed for the radiometric processing chain that produces reflectance image mosaics and multiangular bidirectional reflectance factor (BRF) observations. The geometric processing workflow consists of the following three steps: (1) determining approximate image orientations using Visual Structure from Motion (VisualSFM) software; (2) calculating improved orientations and sensor calibration using a method based on self-calibrating bundle block adjustment (standard photogrammetric software; this step is optional); and finally (3) creating dense 3D point clouds and DSMs using Photogrammetric Surface Reconstruction from Imagery (SURE) software, which is based on the semi-global matching algorithm and is capable of providing a point density corresponding to the pixel size of the image. We have tested the geometric processing workflow over various targets, including test fields, agricultural fields, lakes and complex 3D structures like forests.
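The core geometric operation behind the workflow, recovering 3D points from oriented image pairs, can be illustrated with standard linear (DLT) triangulation. This is a hedged sketch with an assumed camera geometry, not the VisualSFM or SURE implementations:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its projections
    x1, x2 (pixel coordinates) in two images with 3x4 matrices P1, P2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]             # dehomogenize

# Assumed geometry: identity pose and a 1-unit baseline along x
K = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0], [0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.3, -0.2, 5.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

Dense matchers such as SURE repeat this kind of intersection for every matched pixel, which is how the point density can approach the image pixel size.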
Automatic Spatio-Temporal Flow Velocity Measurement in Small Rivers Using Thermal Image Sequences
NASA Astrophysics Data System (ADS)
Lin, D.; Eltner, A.; Sardemann, H.; Maas, H.-G.
2018-05-01
An automatic spatio-temporal flow velocity measurement approach using an uncooled thermal camera is proposed in this paper. The basic principle of the method is to track visible thermal features at the water surface in thermal camera image sequences. Radiometric and geometric calibrations are first implemented to remove vignetting effects in the thermal imagery and to obtain the interior orientation parameters of the camera. An object-based unsupervised classification approach is then applied to detect the regions of interest for data referencing and thermal feature tracking. Subsequently, GCPs are extracted to orient the river image sequences, and local hot points are identified as tracking features. Afterwards, accurate dense tracking outputs are obtained using the pyramidal Lucas-Kanade method. To validate the accuracy potential of the method, measurements obtained from thermal feature tracking are compared with reference measurements taken by a propeller gauge. Results show a great potential for automatic flow velocity measurement in small rivers using imagery from a thermal camera.
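The tracking step can be illustrated with the basic single-level Lucas-Kanade solution; the paper uses the pyramidal variant, which wraps this calculation in a coarse-to-fine loop. The synthetic blob, window size, and motion below are illustrative assumptions:

```python
import numpy as np

def lucas_kanade(I1, I2, center, win=15):
    """Single-level Lucas-Kanade: solve Ix*u + Iy*v = -It by least squares
    over a square window. Valid for small displacements only; the pyramidal
    variant handles larger motion by repeating this at coarser scales."""
    Iy, Ix = np.gradient(I1)          # spatial gradients (rows = y, cols = x)
    It = I2 - I1                      # temporal difference
    r, c = center
    s = slice(r - win, r + win + 1), slice(c - win, c + win + 1)
    A = np.stack([Ix[s].ravel(), Iy[s].ravel()], axis=1)
    b = -It[s].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v                       # flow in x and y (pixels per frame)

# Synthetic "thermal feature": a Gaussian hot spot drifting by (1.0, 0.5) px
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 4.0 ** 2))
I1, I2 = blob(32, 32), blob(33, 32.5)
u, v = lucas_kanade(I1, I2, (32, 32))
```

Dividing the recovered pixel flow by the frame interval and applying the exterior orientation converts it into a surface velocity, which is the quantity compared against the propeller gauge.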
Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm
Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong
2016-01-01
In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895
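The idea of penalizing class-size imbalance can be sketched with a toy variant of K-means in which each point's assignment cost is its squared distance plus a term proportional to the current cluster size. This is a simplified stand-in for the paper's adaptive constraint; the penalty weight, initialization, and data are assumptions:

```python
import numpy as np

def constrained_kmeans(X, k, lam=0.05, iters=20, seed=0):
    """K-means with an additive size penalty: a point's assignment cost is
    its squared distance to a centre plus lam times the cluster's current
    size, discouraging classes with a large variation in membership."""
    rng = np.random.default_rng(seed)
    # Deterministic farthest-point initialization (an assumption; k = 2 here)
    centers = np.stack([X[0], X[np.argmax(((X - X[0]) ** 2).sum(axis=1))]])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        sizes = np.zeros(k)
        for i in rng.permutation(len(X)):     # sequential, size-aware assignment
            d2 = ((X[i] - centers) ** 2).sum(axis=1)
            labels[i] = int(np.argmin(d2 + lam * sizes))
            sizes[labels[i]] += 1
        for j in range(k):                    # usual centre update
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two projection "classes": well-separated 2D blobs of 50 points each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, _ = constrained_kmeans(X, 2)
```

With balanced input the penalty leaves the solution essentially unchanged; its effect appears when noise would otherwise let one class absorb most of the particles.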
Sub-Camera Calibration of a Penta-Camera
NASA Astrophysics Data System (ADS)
Jacobsen, K.; Gerke, M.
2016-03-01
Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and were independently adjusted and analyzed with the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions, also for the inclined cameras, exceeding 5 μm, even though they are described as negligible in the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used.
With the exception of the angular affinity, the systematic image errors for corresponding cameras of both blocks show the same trend, but, as usual for block adjustments with self-calibration, they still show significant differences. Based on the very high number of image points, the remaining image residuals can be safely determined by overlaying and averaging the image residuals according to their image coordinates. The size of the systematic image errors not covered by the used additional parameters is in the range of a root mean square of 0.1 pixels, corresponding to 0.6 μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general, bundle block adjustment with a satisfactory set of additional parameters, checked against remaining systematic errors, is required to exploit the whole geometric potential of the penta camera. Especially for object points on facades, often visible in only two images taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets, the self-calibration of sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior orientation due to missing crossing flight directions. As usual, the systematic image errors differ from block to block, even without the influence of the correlation to the exterior orientation.
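The radial symmetric and tangential (decentering) distortions discussed above are commonly parameterized with the Brown model. The sketch below shows that parameterization with assumed coefficient values; it is not IGI's calibration model or the BLUH additional-parameter set:

```python
import numpy as np

def brown_distortion(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply the Brown model: radial terms (k1, k2) and decentering
    (tangential) terms (p1, p2) to normalized image coordinates."""
    r2 = x ** 2 + y ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
    yd = y * radial + p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity; a small k1 shifts
# the image corners the most, mirroring the corner effects noted above.
x, y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
xd, yd = brown_distortion(x, y, k1=1e-3)
```

Because the radial terms grow with r², residual distortion of a few micrometres at the corners can coexist with a near-perfect fit near the principal point, which is why laboratory and in-flight estimates can disagree.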
Lee, Bang Yeon; Kang, Su-Tae; Yun, Hae-Bum; Kim, Yun Yong
2016-01-12
The distribution of fiber orientation is an important factor in determining the mechanical properties of fiber-reinforced concrete. This study proposes a new image analysis technique for improving the evaluation accuracy of fiber orientation distribution in the sectional image of fiber-reinforced concrete. A series of tests on the accuracy of fiber detection and the estimation performance of fiber orientation was performed on artificial fiber images to assess the validity of the proposed technique. The validation test results showed that the proposed technique estimates the distribution of fiber orientation more accurately than the direct measurement of fiber orientation by image analysis.
Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F
2011-02-01
To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and the experimentally acquired, autosegmented projections of the object. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred binary applicator-only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid with the measured images, and the nearest-neighbor applicator positional difference was calculated empirically for each image pair. In the numerical simulations, the tandem and colpostat positions (x, y, z) and orientations (alpha, beta, gamma) were estimated with accuracies of 0.6 mm and 2 degrees, respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor.
This work describes a novel, accurate, fast, and completely automatic method to localize radio-opaque applicators of arbitrary shape from measured 2D x-ray projections. The results demonstrate approximately 1 mm accuracy when compared against the measured applicator projections. No lateral film is needed. By localizing the applicator internal structure as well as the radioactive sources, the effect of intra-applicator and inter-applicator attenuation can be included in the resulting dose calculations. Further validation tests using clinically acquired tandem and colpostat images will be performed for accurate and robust applicator/source localization in ICB patients.
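The matching principle behind gIFPM can be illustrated with a deliberately reduced toy: a 3-DOF pose search (in-plane rotation and translation) over projected model points, scored by sum-of-squared differences. The real algorithm refines all six pose parameters against intensity images; the point model, search grid, and ground-truth pose below are assumptions for illustration only:

```python
import numpy as np

def ssqd_pose_search(model, target_xy, thetas, txs, tys):
    """Toy 3-DOF analogue of forward projection matching: rotate the 3D
    model about z, translate in-plane, orthographically project, and keep
    the pose whose projections best match the target (sum of squared
    differences)."""
    best, best_pose = np.inf, None
    for th in thetas:
        c, s = np.cos(th), np.sin(th)
        xy = model[:, :2] @ np.array([[c, -s], [s, c]]).T   # rotate + project
        for tx in txs:
            for ty in tys:
                d = xy + np.array([tx, ty]) - target_xy
                err = (d ** 2).sum()                        # SSQD score
                if err < best:
                    best, best_pose = err, (th, tx, ty)
    return best_pose

# Simple applicator-like point model and a known ground-truth pose
model = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [2, 1, 0], [0, 2, 0]], float)
true_th, true_t = 0.1, np.array([2.0, 1.0])
c, s = np.cos(true_th), np.sin(true_th)
target = model[:, :2] @ np.array([[c, -s], [s, c]]).T + true_t

pose = ssqd_pose_search(model, target,
                        thetas=np.linspace(-0.3, 0.3, 61),
                        txs=np.linspace(-5, 5, 21),
                        tys=np.linspace(-5, 5, 21))
```

gIFPM replaces this exhaustive grid with iterative refinement from an initial pose estimate, which is what keeps the six-dimensional search tractable.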
Developing operation algorithms for vision subsystems in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Shikhman, M. V.; Shidlovskiy, S. V.
2018-05-01
The paper analyzes algorithms for selecting keypoints on the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
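The HOG descriptor underlying such detectors can be sketched without the SVM stage. The following numpy-only version computes per-cell orientation histograms; block normalization and the trained support vector classifier are omitted, and the cell size and bin count are conventional assumed values:

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Histogram of oriented gradients: per-cell histograms of unsigned
    gradient orientation (0-180 deg), weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180          # unsigned angles
    h, w = img.shape
    cy, cx = h // cell, w // cell
    hist = np.zeros((cy, cx, bins))
    bin_idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)
    for i in range(cy):
        for j in range(cx):
            s = slice(i * cell, (i + 1) * cell), slice(j * cell, (j + 1) * cell)
            for b in range(bins):
                hist[i, j, b] = mag[s][bin_idx[s] == b].sum()
    return hist.ravel()

# A 64x64 test image yields an 8x8 grid of cells, 9 bins each = 576 features
img = np.zeros((64, 64))
img[:, 32:] = 255.0            # single vertical edge
feats = hog_features(img)
```

In a detector like the one described, such feature vectors are extracted from a sliding window and fed to a linear SVM that separates "person" or "obstacle" windows from background.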
Image-guided laser projection for port placement in minimally invasive surgery.
Marmurek, Jonathan; Wedlake, Chris; Pardasani, Utsav; Eagleson, Roy; Peters, Terry
2006-01-01
We present an application of an augmented reality laser projection system in which procedure-specific optimal incision sites, computed from pre-operative image acquisition, are superimposed on a patient to guide port placement in minimally invasive surgery. Tests were conducted to evaluate the fidelity of computed and measured port configurations, and to validate the accuracy with which a surgical tool-tip can be placed at an identified virtual target. A high resolution volumetric image of a thorax phantom was acquired using helical computed tomography imaging. Oriented within the thorax, a phantom organ with marked targets was visualized in a virtual environment. A graphical interface enabled marking the locations of target anatomy, and calculation of a grid of potential port locations along the intercostal rib lines. Optimal configurations of port positions and tool orientations were determined by an objective measure reflecting image-based indices of surgical dexterity, hand-eye alignment, and collision detection. Intra-operative registration of the computed virtual model and the phantom anatomy was performed using an optical tracking system. Initial trials demonstrated that computed and projected port placement provided direct access to target anatomy with an accuracy of 2 mm.
Monovision techniques for telerobots
NASA Technical Reports Server (NTRS)
Goode, P. W.; Carnils, K.
1987-01-01
The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.
In-situ measurement of objective lens data of a high-resolution electron microscope.
NASA Technical Reports Server (NTRS)
Heinemann, K.
1971-01-01
Bragg-reflex images of small individual crystallites in the size range of 20-100 Å diameter with known crystallographic orientation were used in a transmission electron microscope to determine in-situ: (a) the relationship between objective lens current (or accelerating voltage) changes in discrete steps and the corresponding defocus, (b) the spherical aberration coefficient, and (c) the axial chromatic aberration coefficient of the objective lens. The accuracy of the described method is better than 5%. The same specimen can advantageously be used to properly align the illuminating beam with respect to the optical axis.
General object-oriented software development
NASA Technical Reports Server (NTRS)
Seidewitz, Edwin V.; Stark, Mike
1986-01-01
Object-oriented design techniques are gaining increasing popularity for use with the Ada programming language. A general approach to object-oriented design is presented which synthesizes the principles of previous object-oriented methods into the overall software life-cycle, providing transitions from specification to design and from design to code. It therefore provides the basis for a general object-oriented development methodology.
An Imaging And Graphics Workstation For Image Sequence Analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-01-01
This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of modern graphics-oriented workstations with digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.
NASA Astrophysics Data System (ADS)
Khodaverdi zahraee, N.; Rastiveis, H.
2017-09-01
Earthquakes are among the most devastating natural events threatening human life throughout history. After an earthquake, information about the damaged area and the amount and type of damage can be a great help to disaster managers in relief and reconstruction. It is very important that these measures are taken immediately after the earthquake, because any delay can lead to greater losses. The purpose of this paper is to propose and implement an automatic approach for mapping destroyed buildings after an earthquake using pre- and post-event high-resolution satellite images. In the proposed method, after preprocessing, segmentation of both images is performed using the multi-resolution segmentation technique. The segmentation results are then intersected in ArcGIS to obtain equal image objects in both images. After that, appropriate textural features, which better discriminate between changed and unchanged areas, are calculated for all the image objects. Finally, the differences of the textural features extracted from the pre- and post-event images are applied as an input feature vector to an artificial neural network that classifies the area into the two classes of changed and unchanged. The proposed method was evaluated using WorldView-2 satellite images acquired before and after the 2010 Haiti earthquake. The reported overall accuracy of 93% proved the ability of the proposed method for post-earthquake building change detection.
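The per-object feature-difference step can be sketched with a single textural feature (the intensity standard deviation) and a fixed threshold standing in for the neural network. Everything below, including the synthetic scene, segment layout, and threshold, is an illustrative assumption:

```python
import numpy as np

def texture_change(pre, post, segments, threshold=1.0):
    """Per-object change detection: compute a simple textural feature for
    each image object in the pre- and post-event images and flag objects
    whose feature difference exceeds a threshold."""
    changed = []
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        diff = abs(pre[mask].std() - post[mask].std())
        if diff > threshold:
            changed.append(int(seg_id))
    return changed

# Synthetic scene: 4 image objects; the texture of object 3 changes after
# the event (a stand-in for collapsed-building rubble)
rng = np.random.default_rng(0)
segments = np.repeat(np.arange(4), 256).reshape(32, 32)
pre = rng.normal(100, 1.0, (32, 32))
post = rng.normal(100, 1.0, (32, 32))
post[segments == 3] = rng.normal(100, 6.0, 256)
changed = texture_change(pre, post, segments)
```

In the paper this scalar comparison is replaced by a vector of textural features per object and a trained neural network, but the object-wise differencing structure is the same.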
[A computer-aided image diagnosis and study system].
Li, Zhangyong; Xie, Zhengxiang
2004-08-01
The revolution in information processing, particularly the digitization of medicine, has changed medical study, work, and management. This paper reports a method to design a system for computer-aided image diagnosis and study. Combining ideas from graph-text systems and picture archiving and communication systems (PACS), the system was realized and used for "prescription through computer", "managing images" and "reading images under computer and helping the diagnosis". Typical examples were also constructed in a database and used to teach beginners. The system was developed with visual developing tools based on object-oriented programming (OOP) and runs on the Windows 9X platform. The system possesses a friendly man-machine interface.
NASA Astrophysics Data System (ADS)
Li, X.; Li, S. W.
2012-07-01
In this paper, an efficient global optimization algorithm from the field of artificial intelligence, Particle Swarm Optimization (PSO), is introduced into close-range photogrammetric data processing. PSO can be applied to obtain approximate values of the exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by the American social psychologist J. Kennedy and the electrical engineer R. C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking and fish schooling. The strategy for obtaining approximate values of the exterior orientation elements using PSO is as follows: from the observed image coordinates and the space coordinates of a few control points, equations for the image coordinate residual errors can be formed. The image coordinate residual error is defined as the difference between the observed image coordinate and the image coordinate computed through the collinearity condition equations; the sum of the absolute values of the residuals over all image coordinates is taken as the objective function to be minimized. First, a coarse search region for the exterior orientation elements is given, and the other parameters are adjusted to make the particles fly within this region. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures such as positioning and measuring space control points in close-range photogrammetry can be avoided. Obviously, this method can improve surveying efficiency greatly and at the same time decrease the surveying cost. During such a process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements for the spatial distribution of the control points.
In order to verify the effectiveness of this algorithm, two experiments were carried out. In the first experiment, images of a standard grid board were taken with a digital camera according to multi-intersection photography. Three points or six points located in the lower-left corner of the standard grid were used as control points, respectively, and the exterior orientation elements of each image were computed through PSO and compared with the elements computed through bundle adjustment. In the second experiment, the exterior orientation elements obtained from the first experiment were used as approximate values in a bundle adjustment, and the space coordinates of the other grid points on the board were then computed. The differences between these computed space coordinates and the known coordinates of the grid points were used to compute the accuracy. The point accuracies computed in the above experiments are ±0.76 mm and ±0.43 mm, respectively. These experiments prove the effectiveness of PSO in close-range photogrammetry for computing approximate values of the exterior orientation elements, and show that the algorithm can meet higher accuracy requirements. In short, PSO can obtain better results in a faster, cheaper way compared with other surveying methods in close-range photogrammetry.
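The PSO procedure described above can be sketched in canonical form. The objective below is a toy L1 residual standing in for the image-coordinate residual function, and the swarm parameters are common textbook choices, not the authors' settings:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical particle swarm optimization: particles fly through the
    search region, drawn toward their personal best and the global best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)            # keep particles in the region
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

# Toy stand-in for the image-coordinate residual objective: an L1 bowl whose
# minimum plays the role of the true orientation elements (assumed values)
target = np.array([1.2, -0.7, 4.0])
objective = lambda p: np.abs(p - target).sum()
best, best_val = pso(objective, bounds=([-10] * 3, [10] * 3))
```

In the photogrammetric setting the particle position vector would hold the six exterior orientation elements and the bounds would be the coarse search region described above.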
Quantification of resolution in multiplanar reconstructions for digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Vent, Trevor L.; Acciavatti, Raymond J.; Kwon, Young Joon; Maidment, Andrew D. A.
2016-03-01
Multiplanar reconstruction (MPR) in digital breast tomosynthesis (DBT) allows tomographic images to be portrayed in various orientations. We have conducted research to determine the resolution of tomosynthesis MPR. We built a phantom that houses a star test pattern to measure resolution. This phantom provides three rotational degrees of freedom. The design consists of two hemispheres with longitudinal and latitudinal grooves that reference angular increments. When joined together, the hemispheres form a dome that sits inside a cylindrical encasement. The cylindrical encasement contains reference notches to match the longitudinal and latitudinal grooves that guide the phantom's rotations. With this design, any orientation of the star-pattern can be analyzed. Images of the star-pattern were acquired using a DBT mammography system at the Hospital of the University of Pennsylvania. Images taken were reconstructed and analyzed by two different methods. First, the maximum visible frequency (in line pairs per millimeter) of the star test pattern was measured. Then, the contrast was calculated at a fixed spatial frequency. These analyses confirm that resolution decreases with tilt relative to the breast support. They also confirm that resolution in tomosynthesis MPR is dependent on object orientation. Current results verify that the existence of super-resolution depends on the orientation of the frequency; the direction parallel to x-ray tube motion shows super-resolution. In conclusion, this study demonstrates that the direction of the spatial frequency relative to the motion of the x-ray tube is a determinant of resolution in MPR for DBT.
Visual Search for Object Orientation Can Be Modulated by Canonical Orientation
ERIC Educational Resources Information Center
Ballaz, Cecile; Boutsen, Luc; Peyrin, Carole; Humphreys, Glyn W.; Marendaz, Christian
2005-01-01
The authors studied the influence of canonical orientation on visual search for object orientation. Displays consisted of pictures of animals whose axis of elongation was either vertical or tilted in their canonical orientation. Target orientation could be either congruent or incongruent with the object's canonical orientation. In Experiment 1,…
Rasmussen, Peter M.; Smith, Amy F.; Sakadžić, Sava; Boas, David A.; Pries, Axel R.; Secomb, Timothy W.; Østergaard, Leif
2017-01-01
Objective: In vivo imaging of the microcirculation and network-oriented modeling have emerged as powerful means of studying microvascular function and understanding its physiological significance. Network-oriented modeling may provide the means of summarizing vast amounts of data produced by high-throughput imaging techniques in terms of key physiological indices. To estimate such indices with sufficient certainty, however, network-oriented analysis must be robust to the inevitable presence of uncertainty due to measurement errors as well as model errors. Methods: We propose the Bayesian probabilistic data analysis framework as a means of integrating experimental measurements and network model simulations into a combined and statistically coherent analysis. The framework naturally handles noisy measurements and provides posterior distributions of model parameters as well as physiological indices associated with uncertainty. Results: We applied the analysis framework to experimental data from three rat mesentery networks and one mouse brain cortex network. We inferred distributions for more than five hundred unknown pressure and hematocrit boundary conditions. Model predictions were consistent with previous analyses, and remained robust when measurements were omitted from model calibration. Conclusion: Our Bayesian probabilistic approach may be suitable for optimizing data acquisition and for analyzing and reporting large datasets acquired as part of microvascular imaging studies. PMID:27987383
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sels, Seppe, E-mail: Seppe.Sels@uantwerpen.be; Ribbens, Bart; Mertens, Luc
Scanning laser Doppler vibrometers (LDV) are used to measure full-field vibration shapes of products and structures. In most commercially available scanning laser Doppler vibrometer systems, the user manually draws a grid of measurement locations on a 2D camera image of the product. Determining the correct physical measurement locations can be a time-consuming and difficult task. In this paper we present a new methodology for product testing and quality control that integrates 3D imaging techniques with vibration measurements. This procedure shortens prototype testing because physical measurement locations are located automatically. The proposed methodology uses a 3D time-of-flight camera to measure the location and orientation of the test object. The 3D image of the time-of-flight camera is then matched with the 3D CAD model of the object, in which measurement locations are pre-defined. A time-of-flight camera operates strictly in the near-infrared spectrum. To improve the signal-to-noise ratio in the time-of-flight measurement, a time-of-flight camera uses a band filter. As a result of this filter, the laser spot of most laser vibrometers is invisible in the time-of-flight image. Therefore a 2D RGB camera is used to find the laser spot of the vibrometer. The laser spot is matched to the 3D image obtained by the time-of-flight camera. Next, an automatic calibration procedure is used to aim the laser at the (pre)defined locations. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. Second, the orientation of the CAD model is known with respect to the laser beam. This information can be used to find the direction of the measured vibration relative to the surface of the object. With this direction, the vibration measurements can be compared more precisely with numerical experiments.
Wave optics of the central spot in planetary occultations
NASA Technical Reports Server (NTRS)
Hubbard, W. B.
1977-01-01
The detection of a bright central spot during the occultation of epsilon Geminorum by Mars demonstrates that an exponentially-stratified planetary atmosphere can act as a lens providing very high resolution of distant objects (e.g., quasars, white dwarfs, and pulsars). The diffraction nature of the central occultation spot is investigated, with special reference to Mars and Venus. In practice, however, central occultations by these planets are seldom observable from the earth's surface, and spacecraft would have to be used to obtain a suitable orientation for observers. Further difficulties may be encountered in image deconvolution needed for extended objects, in location of the image of a true point source, and in compensation for peculiarities of planets and their atmospheres.
A neural network approach for image reconstruction in electron magnetic resonance tomography.
Durairaj, D Christopher; Krishna, Murali C; Murugesan, Ramachandran
2007-10-01
An object-oriented, artificial neural network (ANN)-based application system for reconstruction of two-dimensional spatial images in electron magnetic resonance (EMR) tomography is presented. The standard back-propagation algorithm is utilized to train a three-layer sigmoidal feed-forward, supervised ANN to perform the image reconstruction. The network learns the relationship between the 'ideal' images that are reconstructed using the filtered back-projection (FBP) technique and the corresponding projection data (sinograms). The input layer of the network is provided with a training set that contains projection data from various phantoms as well as in vivo objects, acquired from an EMR imager. Twenty-five different network configurations are investigated to test the generalization ability of the network. The trained ANN then reconstructs two-dimensional spatio-temporal images that present the distribution of free radicals in biological systems. Image reconstruction by the trained neural network shows better time complexity than conventional iterative reconstruction algorithms such as the multiplicative algebraic reconstruction technique (MART). The network is further explored for image reconstruction from 'noisy' EMR data and the results show better performance than the FBP method. The network is also tested for its ability to reconstruct from limited-angle EMR data sets.
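The training loop described above, a three-layer sigmoidal feed-forward network learning a sinogram-to-image mapping by back-propagation, can be sketched on synthetic data. The phantom generator, network sizes and learning rate below are illustrative stand-ins, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 6x6 "phantoms" containing one bright square, and
# their row/column sums as a crude two-angle sinogram (illustrative only).
def make_pair():
    img = np.zeros((6, 6))
    r, c = rng.integers(0, 4, size=2)
    img[r:r + 2, c:c + 2] = 1.0
    sino = np.concatenate([img.sum(axis=0), img.sum(axis=1)])
    return sino, img.ravel()

X, Y = map(np.array, zip(*[make_pair() for _ in range(200)]))

# Three-layer sigmoidal feed-forward net trained by plain back-propagation.
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(0.0, 0.3, (12, 20))   # input -> hidden weights
W2 = rng.normal(0.0, 0.3, (20, 36))   # hidden -> output weights
lr = 0.1
for epoch in range(500):
    H = sig(X @ W1)                   # hidden activations
    P = sig(H @ W2)                   # reconstructed pixel values
    err = P - Y
    dP = err * P * (1 - P)            # gradient through output sigmoid
    dH = (dP @ W2.T) * H * (1 - H)    # gradient through hidden sigmoid
    W2 -= lr * H.T @ dP / len(X)
    W1 -= lr * X.T @ dH / len(X)

final_mse = float(np.mean((sig(sig(X @ W1) @ W2) - Y) ** 2))
print(final_mse)
```

Once trained, reconstruction is a single forward pass, which is why such a network can beat iterative methods like MART on time complexity.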
User-oriented evaluation of a medical image retrieval system for radiologists.
Markonis, Dimitrios; Holzer, Markus; Baroz, Frederic; De Castaneda, Rafael Luis Ruiz; Boyer, Célia; Langs, Georg; Müller, Henning
2015-10-01
This article reports the user-oriented evaluation of a text- and content-based medical image retrieval system. User tests with radiologists using a search system for images in the medical literature are presented. The goal of the tests is to assess the usability of the system and identify system and interface aspects that need improvement and useful additions. Another objective is to investigate the system's added value to radiology information retrieval. The study provides insight into required specifications and potential shortcomings of medical image retrieval systems through a concrete methodology for conducting user tests. User tests with a working image retrieval system of images from the biomedical literature were performed in an iterative manner, where each iteration had the participants perform radiology information-seeking tasks, followed by refinement of the system as well as of the user study design itself. During these tasks the interaction of the users with the system was monitored, usability aspects were measured, retrieval success rates were recorded and feedback was collected through survey forms. In total, 16 radiologists participated in the user tests. The success rates in finding relevant information were on average 87% and 78% for image and case retrieval tasks, respectively. The average time for a successful search was below 3 min in both cases. Users quickly felt comfortable with the novel techniques and tools (after 5 to 15 min), such as content-based image retrieval and relevance feedback. User satisfaction measures show a very positive attitude toward the system's functionalities, while the user feedback helped identify the system's weak points. The participants proposed several potentially useful new functionalities, such as filtering by imaging modality and search for articles using image examples. The iterative character of the evaluation helped to obtain diverse and detailed feedback on all system aspects.
Radiologists quickly became familiar with the functionalities but had several comments on desired additions. The analysis of the results can potentially assist system refinement for future medical information retrieval systems. Moreover, the methodology presented, as well as the discussion of the limitations and challenges of such studies, can be useful for user-oriented medical image retrieval evaluation, as user-oriented evaluation of interactive systems is still only rarely performed. Such interactive evaluations can be kept modest in effort if done iteratively and can give many insights for developing better systems. Copyright © 2015. Published by Elsevier Ireland Ltd.
NASA Astrophysics Data System (ADS)
El Bekri, Nadia; Angele, Susanne; Ruckhäberle, Martin; Peinsipp-Byma, Elisabeth; Haelke, Bruno
2015-10-01
This paper introduces an interactive recognition assistance system for imaging reconnaissance. This system supports aerial image analysts on missions during two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of one single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types. It is one of the most challenging tasks in imaging reconnaissance. Currently, no high-potential ATR (automatic target recognition) applications are available; as a consequence, the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to changed warfare and the rise of asymmetric threats, it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features that originate from the image signatures.
The infrastructure analysis mode aims to analyze the function of the infrastructure. The image analyst visually extracts certain target object signatures, assigns them to corresponding object features and is finally able to recognize the object type. The system offers the possibility to assign the image signatures to features given by sample images. The underlying data set contains a wide range of object features and object types for different domains such as ships or land vehicles. Each domain has its own feature tree developed by expert aerial image analysts. By selecting the corresponding features, the possible solution set of objects is automatically reduced to match only the objects that contain the selected features. Moreover, we give an outlook on current research in the field of ground target analysis, in which we deal with partly automated methods to extract image signatures and assign them to the corresponding features. This research includes methods for automatically determining the orientation of an object and geometric features such as its width and length. This step makes it possible to automatically narrow the set of object types offered to the image analyst by the interactive recognition assistance system.
PET-Based Confirmation of Orientation Sensitivity of TMS-Induced Cortical Activation in Humans
Krieg, Todd D.; Salinas, Felipe S.; Narayana, Shalini; Fox, Peter T.; Mogul, David J.
2017-01-01
Background Currently, it is difficult to predict precise regions of cortical activation in response to transcranial magnetic stimulation (TMS). Most analytical approaches focus on applied magnetic field strength in the target region as the primary factor, placing activation on the gyral crowns. However, imaging studies support M1 targets being typically located in the sulcal banks. Objective/hypothesis To more thoroughly investigate this inconsistency, we sought to determine whether neocortical surface orientation was a critical determinant of regional activation. Methods MR images were used to construct cortical and scalp surfaces for 18 subjects. The angle (θ) between the cortical surface normal and its nearest scalp normal for ~50,000 cortical points per subject was used to quantify cortical location (i.e., gyral vs. sulcal). TMS-induced activations of primary motor cortex (M1) were compared to brain activations recorded during a finger-tapping task using concurrent positron emission tomographic (PET) imaging. Results Brain activations were primarily sulcal for both the TMS and task activations (P < 0.001 for both) compared to the overall cortical surface orientation. Also, the location of maximal blood flow in response to either TMS or finger-tapping correlated well using the cortical surface orientation angle or distance to scalp (P < 0.001 for both) as criteria for comparison between different neocortical activation modalities. Conclusion This study provides further evidence that a major factor in cortical activation using TMS is the orientation of the cortical surface with respect to the induced electric field. The results show that, despite the gyral crown of the cortex being subjected to a larger magnetic field magnitude, the sulcal bank of M1 had larger cerebral blood flow (CBF) responses during TMS. PMID:23827648
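The geometric quantity at the heart of the analysis above, the angle θ between a cortical surface normal and its nearest scalp normal, is straightforward to compute from the two normal vectors; a minimal sketch:

```python
import numpy as np

def normal_angle_deg(cortex_normal, scalp_normal):
    """Angle (degrees) between a cortical surface normal and its nearest
    scalp normal. Small angles correspond to gyral crowns facing the scalp;
    angles near 90 degrees to sulcal banks. Inputs need not be unit length."""
    a = np.asarray(cortex_normal, dtype=float)
    b = np.asarray(scalp_normal, dtype=float)
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against tiny floating-point excursions outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

print(normal_angle_deg([0, 0, 1], [0, 0, 1]))   # parallel normals (gyral-crown-like)
print(normal_angle_deg([1, 0, 0], [0, 0, 1]))   # orthogonal normals (sulcal-bank-like)
```

Applying this to ~50,000 cortical points per subject, as the paper does, is a vectorized version of the same dot-product computation.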
Image classification independent of orientation and scale
NASA Astrophysics Data System (ADS)
Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain
1998-04-01
The recognition of targets independently of orientation has become fairly well developed in recent years for in-plane rotation. The out-of-plane rotation problem is much less advanced. When both out-of-plane rotations and changes of scale are present, the problem becomes very difficult. In this paper we describe our research on the combined out-of-plane rotation and scale invariance problem. The rotations were limited to rotations about an axis perpendicular to the line of sight. The objects to be classified were three kinds of military vehicles. The inputs used were infrared imagery and photographs. We used a variation of a method proposed by Neiberg and Casasent, where a neural network is trained with a subset of the database and minimum distances from lines in feature space are used for classification instead of nearest neighbors. Each line in the feature space corresponds to one class of objects, and points on one line correspond to different orientations of the same target. We found that the training samples needed to be closer together for some orientations than for others, and that the most difficult orientations are those where the target is head-on to the observer. By means of some additional training of the neural network, we were able to achieve 100% correct classification over 360 degrees of rotation and a range of scales spanning a factor of five.
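The classification rule described above, assigning a feature vector to the class whose line in feature space lies nearest, can be sketched as follows. The two class lines and the 3-D feature space are hypothetical placeholders for a trained network's feature representation.

```python
import numpy as np

def dist_to_line(x, p0, d):
    """Perpendicular distance from feature vector x to the line p0 + t*d."""
    d = d / np.linalg.norm(d)
    v = x - p0
    return float(np.linalg.norm(v - np.dot(v, d) * d))

# Hypothetical class lines: each line approximates the trajectory traced in
# feature space by one vehicle class as its orientation varies.
lines = {
    "tank":  (np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])),
    "truck": (np.array([0.0, 2.0, 0.0]), np.array([1.0, 0.0, 1.0])),
}

def classify(x):
    """Minimum-distance-to-line rule (instead of nearest neighbors)."""
    return min(lines, key=lambda c: dist_to_line(x, *lines[c]))

print(classify(np.array([2.0, 2.1, 0.0])))   # lies close to the "tank" line
```

Because every point on a class line stands for one orientation of the same target, the rule is orientation-invariant by construction within each class.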
Urban Change Detection of Pingtan City based on Bi-temporal Remote Sensing Images
NASA Astrophysics Data System (ADS)
Degang, JIANG; Jinyan, XU; Yikang, GAO
2017-02-01
In this paper, a pair of SPOT 5-6 images with a resolution of 0.5 m is selected. An object-oriented classification method is applied to the two images, and five classes of ground features are identified: man-made objects, farmland, forest, waterbody and unutilized land. An auxiliary ASTER GDEM is used to improve the classification accuracy. Change detection based on the classification results is then performed, and accuracy assessment is carried out; satisfactory results are obtained. The results show that great changes in Pingtan city have been detected, namely the expansion of the city area and the growing density of man-made buildings, roads and other infrastructure following the establishment of the Pingtan comprehensive experimental zone. A wide range of open sea along the island's coastal zones has been reclaimed for port and CBD construction.
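Post-classification change detection of the kind described above reduces to comparing the two per-pixel label maps. A minimal sketch on a tiny invented grid (the integer class coding below is illustrative, not from the paper):

```python
import numpy as np

# Labels 0..4 stand for man-made, farmland, forest, waterbody and
# unutilized land (the coding is invented for this demo).
t1 = np.array([[1, 1, 2],
               [3, 2, 2],
               [4, 4, 1]])   # classification of the earlier image
t2 = np.array([[0, 1, 2],
               [0, 2, 0],
               [4, 0, 1]])   # classification of the later image

change_mask = t1 != t2                  # pixels whose class label changed
to_manmade = change_mask & (t2 == 0)    # e.g. newly built-up pixels

print(int(change_mask.sum()), int(to_manmade.sum()))
```

Tabulating `t1` against `t2` over the changed pixels yields a from-to change matrix, the usual product of post-classification comparison.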
Diamond Eye: a distributed architecture for image data mining
NASA Astrophysics Data System (ADS)
Burl, Michael C.; Fowlkes, Charless; Roden, Joe; Stechert, Andre; Mukhtar, Saleem
1999-02-01
Diamond Eye is a distributed software architecture, which enables users (scientists) to analyze large image collections by interacting with one or more custom data mining servers via a Java applet interface. Each server is coupled with an object-oriented database and a computational engine, such as a network of high-performance workstations. The database provides persistent storage and supports querying of the 'mined' information. The computational engine provides parallel execution of expensive image processing, object recognition, and query-by-content operations. Key benefits of the Diamond Eye architecture are: (1) the design promotes trial evaluation of advanced data mining and machine learning techniques by potential new users (all that is required is to point a web browser to the appropriate URL), (2) software infrastructure that is common across a range of science mining applications is factored out and reused, and (3) the system facilitates closer collaborations between algorithm developers and domain experts.
Thoe, Robert S.
1991-01-01
Method and apparatus for producing sharp, chromatic, magnified images of X-ray emitting objects are provided. The apparatus, which constitutes an X-ray microscope or telescope, comprises a connected collection of Bragg reflecting planes, comprising either a bent crystal or a synthetic multilayer structure, disposed on and adjacent to a locus determined by a spherical surface. The individual Bragg planes are spatially oriented to Bragg reflect radiation from the object location toward the image location. This is accomplished by making the Bragg planes spatially coincident with the surfaces of either a nested series of prolate ellipsoids of revolution or a nested series of spheres. The spacing between the Bragg reflecting planes can be tailored to control the wavelengths and the amount of the X-radiation that is Bragg reflected to form the X-ray image.
A Novel Optical/digital Processing System for Pattern Recognition
NASA Technical Reports Server (NTRS)
Boone, Bradley G.; Shukla, Oodaye B.
1993-01-01
This paper describes two processing algorithms that can be implemented optically: the Radon transform and angular correlation. These two algorithms can be combined in one optical processor to extract all the basic geometric and amplitude features from objects embedded in video imagery. We show that the internal amplitude structure of objects is recovered by the Radon transform, which is a well-known result, but, in addition, we show simulation results for angular correlation, a simple but unique algorithm that extracts object boundaries from suitably thresholded images, from which length, width, area, aspect ratio, and orientation can be derived. Besides circumventing scale and rotation distortions, these simulations indicate that the features derived from the angular correlation algorithm are relatively insensitive to tracking shifts and image noise. Some optical architecture concepts, including one based on micro-optical lenslet arrays, have been developed to implement these algorithms. Simulation tests and evaluation using simple synthetic object data are described, including results of a study that uses object boundaries (derivable from angular correlation) to classify simple objects using a neural network.
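The geometric features listed above (area, orientation, and so on) can also be obtained digitally from a thresholded image. The sketch below uses standard image moments rather than the paper's optical angular-correlation algorithm, purely to illustrate the kind of features involved.

```python
import numpy as np

def shape_features(binary):
    """Area, centroid and principal-axis orientation of a thresholded object,
    computed from image moments (a standard digital stand-in for the optical
    feature extraction described above)."""
    ys, xs = np.nonzero(binary)
    area = int(xs.size)
    cx, cy = xs.mean(), ys.mean()
    # Second central moments of the pixel coordinates
    mu20 = np.mean((xs - cx) ** 2)
    mu02 = np.mean((ys - cy) ** 2)
    mu11 = np.mean((xs - cx) * (ys - cy))
    # Orientation of the principal axis, in degrees
    theta = 0.5 * np.degrees(np.arctan2(2.0 * mu11, mu20 - mu02))
    return area, (cx, cy), theta

# A horizontal 2x6 bar: its principal axis should come out near 0 degrees.
img = np.zeros((10, 10), dtype=bool)
img[4:6, 2:8] = True
area, centroid, theta = shape_features(img)
print(area, round(theta, 1))
```

Length, width and aspect ratio follow from the same moments (eigenvalues of the second-moment matrix), mirroring the feature set the optical processor extracts.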
Design and applications of a multimodality image data warehouse framework.
Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L
2002-01-01
A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.
Exploiting core knowledge for visual object recognition.
Schurgin, Mark W; Flombaum, Jonathan I
2017-03-01
Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as a source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included 2 manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints, often characterized as 'Core Knowledge', are known to support perception and cognition broadly, even in young infants. But they have never been considered as a mechanism for memory with respect to recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Boyer, K. L.; Wuescher, D. M.; Sarkar, S.
1991-01-01
Dynamic edge warping (DEW), a technique for recovering reasonably accurate disparity maps from uncalibrated stereo image pairs, is presented. No precise knowledge of the epipolar camera geometry is assumed. The technique is embedded in a system with structural stereopsis on the front end and robust estimation from digital photogrammetry on the back end, for the purpose of self-calibrating stereo image pairs. Once the relative camera orientation is known, the epipolar geometry is computed and the system can use this information to refine its representation of the object space. Such a system will find application in the autonomous extraction of terrain maps from stereo aerial photographs, for which camera position and orientation are unknown a priori, and in online autonomous calibration maintenance for robotic vision applications, in which the cameras are subject to vibration and other physical disturbances after calibration. This work thus forms a component of an intelligent system that begins with a pair of images and, having only vague knowledge of the conditions under which they were acquired, produces an accurate, dense, relative depth map. The resulting disparity map can also be used directly in some high-level applications involving qualitative scene analysis, spatial reasoning, and perceptual organization of the object space. The system as a whole substitutes high-level information and constraints for precise geometric knowledge in driving and constraining the early correspondence process.
Minimal camera networks for 3D image based modeling of cultural heritage objects.
Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma
2014-03-25
3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and a final accuracy of 1 mm.
Real-space X-ray tomographic reconstruction of randomly oriented objects with sparse data frames.
Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M
2014-02-10
Schemes for X-ray imaging of single protein molecules using new X-ray sources, like X-ray free electron lasers (XFELs), require processing many frames of data that are obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and the short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, much too low to be processed using standard methods. One approach to processing the data is to use the statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009), which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10^-2 photons/pixel/frame) obtained by performing X-ray transmission measurements of a low-contrast, randomly oriented object. This extends the work of Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single-molecule reconstruction problem.
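The statistical idea can be sketched in one dimension: treat the unknown "orientation" as a cyclic shift of a known intensity model and score candidate shifts by their Poisson log-likelihood given sparse photon frames. For simplicity all frames below share one unknown shift (the EMC algorithm additionally handles a different unknown orientation per frame); the model values and frame count are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D intensity model; mean counts are far below 1 photon/pixel/frame,
# so no single frame identifies the shift on its own.
model = np.array([0.001, 0.002, 0.02, 0.05, 0.02, 0.002, 0.001, 0.001])
true_shift = 3
frames = rng.poisson(np.roll(model, true_shift), size=(200, model.size))

def frame_log_likelihood(frame, shift):
    """Poisson log-likelihood of one frame under a candidate shift,
    dropping the log(k!) term, which is constant across shifts."""
    w = np.roll(model, shift)
    return float(np.sum(frame * np.log(w) - w))

# Pool the sparse frames: sum per-frame log-likelihoods per candidate shift.
logL = np.array([sum(frame_log_likelihood(f, s) for f in frames)
                 for s in range(model.size)])
probs = np.exp(logL - logL.max())
probs /= probs.sum()
best = int(np.argmax(probs))
print(best)
```

Even though most individual frames contain zero photons, pooling the likelihoods across the data set recovers the shift, which is the essential point of processing the data set as a whole.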
VIMOS Instrument Control Software Design: an Object Oriented Approach
NASA Astrophysics Data System (ADS)
Brau-Nogué, Sylvie; Lucuix, Christian
2002-12-01
The Franco-Italian VIMOS instrument is a VIsible imaging Multi-Object Spectrograph with outstanding multiplex capabilities, allowing spectra of more than 800 objects to be taken simultaneously, or integral-field spectroscopy in a 54x54 arcsec area. VIMOS is being installed at the Nasmyth focus of the third Unit Telescope of the European Southern Observatory Very Large Telescope (VLT) at Mount Paranal in Chile. This paper describes the analysis, design and implementation of the VIMOS Instrument Control System, using UML notation. Our control group followed an object-oriented software process while keeping in mind the ESO VLT standard control concepts. At ESO VLT a complete software library is available. Rather than applying a waterfall lifecycle, the ICS project used iterative development, a lifecycle consisting of several iterations. Each iteration consisted of requirements capture and evaluation, visual modeling for analysis and design, implementation, testing, and deployment. Depending on the project phase, iterations focused more or less on specific activities. The result is an object model (the design model), including use-case realizations. An implementation view and a deployment view complement this product. An extract of the VIMOS ICS UML model will be presented and some implementation, integration and test issues will be discussed.
Imaging informatics for consumer health: towards a radiology patient portal
Arnold, Corey W; McNamara, Mary; El-Saden, Suzie; Chen, Shawn; Taira, Ricky K; Bui, Alex A T
2013-01-01
Objective With the increased routine use of advanced imaging in clinical diagnosis and treatment, it has become imperative to provide patients with a means to view and understand their imaging studies. We illustrate the feasibility of a patient portal that automatically structures and integrates radiology reports with corresponding imaging studies according to several information orientations tailored for the layperson. Methods The imaging patient portal is composed of an image processing module for the creation of a timeline that illustrates the progression of disease, a natural language processing module to extract salient concepts from radiology reports (73% accuracy, F1 score of 0.67), and an interactive user interface navigable by an imaging findings list. The portal was developed as a Java-based web application and is demonstrated for patients with brain cancer. Results and discussion The system was exhibited at an international radiology conference to solicit feedback from a diverse group of healthcare professionals. There was wide support for educating patients about their imaging studies, and an appreciation for the informatics tools used to simplify images and reports for consumer interpretation. Primary concerns included the possibility of patients misunderstanding their results, as well as worries regarding accidental improper disclosure of medical information. Conclusions Radiologic imaging constitutes a significant amount of the evidence used to make diagnostic and treatment decisions, yet there are few tools for explaining this information to patients. The proposed radiology patient portal provides a framework for organizing radiologic results into several information orientations to support patient education. PMID:23739614
Al-Janabi, Shahd; Greenberg, Adam S
2016-10-01
The representational basis of attentional selection can be object-based. Various studies have suggested, however, that object-based selection is less robust than spatial selection across experimental paradigms. We sought to examine the manner by which the following factors might explain this variation: Target-Object Integration (targets 'on' vs. part 'of' an object), Attention Distribution (narrow vs. wide), and Object Orientation (horizontal vs. vertical). In Experiment 1, participants discriminated between two targets presented 'on' an object in one session, or presented as a change 'of' an object in another session. There was no spatial cue (thus, attention was initially focused widely) and the objects were horizontal or vertical. We found evidence of object-based selection only when targets constituted a change 'of' an object. Additionally, object orientation modulated the sign of object-based selection: We observed a same-object advantage for horizontal objects, but a same-object cost for vertical objects. In Experiment 2, an informative cue preceded a single target presented 'on' an object or as a change 'of' an object (thus, attention was initially focused narrowly). Unlike in Experiment 1, we found evidence of object-based selection independent of target-object integration. We again found that the sign of selection was modulated by the objects' orientation. This result may reflect a meridian effect, which emerged due to anisotropies in the cortical representations when attention is oriented endogenously. Experiment 3 revealed that object orientation did not modulate object-based selection when attention was oriented exogenously. Our findings suggest that target-object integration, attention distribution, and object orientation modulate object-based selection, but only in combination.
Xue, Zhong; Li, Hai; Guo, Lei; Wong, Stephen T.C.
2010-01-01
Spatially aligning diffusion tensor images (DTI) is a key step in quantitatively comparing neural images obtained from different subjects or from the same subject at different timepoints. Unlike traditional scalar or multi-channel image registration methods, DTI registration must take tensor orientation into account. Recently, several DTI registration methods have been proposed in the literature, but their deformation fields depend purely on extracted tensor features, not the whole tensor information. Other methods, such as the piece-wise affine transformation and the diffeomorphic non-linear registration algorithms, use analytical gradients of the registration objective functions while simultaneously considering the reorientation and deformation of tensors during registration. However, only relatively local tensor information, such as voxel-wise tensor similarity, is utilized. This paper proposes a new DTI registration algorithm, called local fast marching (FM)-based simultaneous registration. The algorithm not only considers the orientation of tensors during registration but also utilizes the neighborhood tensor information of each voxel to drive the deformation; this neighborhood information is extracted by running a local fast marching algorithm around the voxels of interest. These local fast marching-based tensor features efficiently reflect the diffusion patterns around each voxel within a spherical neighborhood and can capture relatively distinctive features of the anatomical structures. Using simulated and real human brain DTI data, the experimental results show that the proposed algorithm is more accurate than FA-based registration and more efficient than its counterpart, the neighborhood tensor similarity-based registration. PMID:20382233
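For context, the fractional anisotropy (FA) used by the baseline FA-based registration mentioned above is a scalar summary of a diffusion tensor's eigenvalues. A minimal sketch, with illustrative tensor values (not from the paper):

```python
import numpy as np

def fractional_anisotropy(tensor):
    """FA of a 3x3 symmetric diffusion tensor, computed from its eigenvalues:
    0 for isotropic diffusion, approaching 1 for strongly directional diffusion."""
    lam = np.linalg.eigvalsh(tensor)
    num = np.sqrt(((lam[0] - lam[1])**2 +
                   (lam[1] - lam[2])**2 +
                   (lam[2] - lam[0])**2) / 2.0)
    den = np.sqrt(np.sum(lam**2)) + 1e-12
    return num / den

# Isotropic tensor -> FA = 0; elongated tensor -> FA close to 1.
iso = np.eye(3) * 1e-3
aniso = np.diag([1.7e-3, 0.2e-3, 0.2e-3])
print(round(fractional_anisotropy(iso), 3), round(fractional_anisotropy(aniso), 3))
```

FA discards orientation entirely, which is exactly why the paper argues for features that keep the full tensor and its neighborhood.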
The neural basis of precise visual short-term memory for complex recognisable objects.
Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri
2017-10-01
Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects. Copyright © 2017 Elsevier Inc. All rights reserved.
Vision based object pose estimation for mobile robots
NASA Technical Reports Server (NTRS)
Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry
1994-01-01
Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern-matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The geometric constraints are chosen from the typical pose of most man-made signs, such as the sign standing vertically and having known dimensions. This system has been tested successfully on a wide range of real images. Marker detection is reliable even in cluttered environments, and under certain marker orientations, orientation estimation has proven accurate to within 2 degrees and distance estimation to within 0.3 meters.
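The distance part of such a known-size constraint follows from the pinhole camera model: with focal length f (in pixels) and real marker height H, an image height of h pixels implies range Z = f·H/h. A minimal sketch (the numbers are illustrative, not from the paper):

```python
def marker_distance(real_height_m, pixel_height, focal_length_px):
    """Pinhole-model range estimate for a marker of known size: Z = f * H / h."""
    return focal_length_px * real_height_m / pixel_height

# A 0.5 m tall sign imaged 100 px tall by an 800 px focal-length camera
# is estimated to be 4 m away.
print(marker_distance(0.5, 100, 800))  # 4.0
```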
Noncontact orientation of objects in three-dimensional space using magnetic levitation
Subramaniam, Anand Bala; Yang, Dian; Yu, Hai-Dong; Nemiroski, Alex; Tricard, Simon; Ellerbee, Audrey K.; Soh, Siowling; Whitesides, George M.
2014-01-01
This paper describes several noncontact methods of orienting objects in 3D space using Magnetic Levitation (MagLev). The methods use two permanent magnets arranged coaxially with like poles facing and a container containing a paramagnetic liquid in which the objects are suspended. Absent external forcing, objects levitating in the device adopt predictable static orientations; the orientation depends on the shape and distribution of mass within the objects. The orientation of objects of uniform density in the MagLev device shows a sharp geometry-dependent transition: an analytical theory rationalizes this transition and predicts the orientation of objects in the MagLev device. Manipulation of the orientation of the levitating objects in space is achieved in two ways: (i) by rotating and/or translating the MagLev device while the objects are suspended in the paramagnetic solution between the magnets; (ii) by moving a small external magnet close to the levitating objects while keeping the device stationary. Unlike mechanical agitation or robotic selection, orienting using MagLev is possible for objects having a range of different physical characteristics (e.g., different shapes, sizes, and mechanical properties from hard polymers to gels and fluids). MagLev thus has the potential to be useful for sorting and positioning components in 3D space, orienting objects for assembly, constructing noncontact devices, and assembling objects composed of soft materials such as hydrogels, elastomers, and jammed granular media. PMID:25157136
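The static levitation height described above follows from a balance of magnetic and gravitational forces. The sketch below assumes a linear axial field between the like-poled magnets, which is only a rough model of the real field; every numeric parameter (field strength, magnet separation, susceptibility and density differences) is an illustrative assumption, not a value from the paper:

```python
import numpy as np

# Assumed device: two magnets separated by d, linear axial field model
# B(z) = B0 * (1 - 2z/d), paramagnetic medium, object slightly denser and
# less magnetic than the medium. All values illustrative, SI units.
mu0 = 4e-7 * np.pi
B0, d = 0.4, 0.045      # field at the magnet face (T), magnet separation (m)
dchi = -1.0e-4          # chi_object - chi_medium (object less paramagnetic)
drho = 20.0             # rho_object - rho_medium (kg/m^3)
g = 9.81

# Per-unit-volume force balance: (dchi/mu0) * B * dB/dz = drho * g.
# With the linear field model this yields a closed-form equilibrium height.
r = drho * g * mu0 * d / (-2 * dchi * B0**2)
z = d * (1 - r) / 2     # equilibrium height above the bottom magnet (m)
print(round(z * 1000, 2))  # 14.7 (mm): denser object sits below the midplane
```

As the abstract notes, the equilibrium orientation additionally depends on the object's shape and mass distribution, which this scalar height balance does not capture.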
Fu, Jianwei; Yang, Xiaoquan; Wang, Kan; Luo, Qingming; Gong, Hui
2011-12-01
A combined system of fluorescence molecular tomography and microcomputed tomography (FMT&mCT) can provide molecular and anatomical information of small animals in a single study with intrinsically coregistered images. The anatomical information provided by the mCT subsystem is commonly used as a reference to locate the fluorophore distribution or as a priori structural information to improve the performance of FMT. Therefore, the transformation between the coordinate systems of the subsystems needs to be determined in advance. A cocalibration method for the combined FMT&mCT system is proposed. First, linear models are adopted to describe the galvano mirrors and the charge-coupled device (CCD) camera in the FMT subsystem. Second, the position and orientation of the galvano mirrors are determined from the input voltages of the galvano mirrors and the markers, whose positions are predetermined. The position, orientation, and normalized pixel size of the CCD camera are obtained by analysing the projections of a point-like marker at different positions. Finally, the orientation and position of sources and the corresponding relationship between the detectors and their projections on the image plane are predicted. Because the positions of the markers are acquired with mCT, registration of FMT and mCT can be realized by direct image fusion. The accuracy and consistency of this method in the presence of noise are evaluated by computer simulation. Next, a practical implementation for an experimental FMT&mCT system is carried out and validated. The maximum prediction error of the source positions on the surface of a cylindrical phantom is within 0.375 mm and that of the projections of a point-like marker is within 0.629 pixels. Finally, imaging experiments of the fluorophore distribution in a cylindrical phantom and a phantom with a complex shape demonstrate the feasibility of the proposed method. This method is universal for FMT&mCT and can be performed with no restriction on the system geometry, calibration phantoms, or imaged objects.
Object detection via eye tracking and fringe restraint
NASA Astrophysics Data System (ADS)
Pan, Fei; Zhang, Hanming; Zeng, Ying; Tong, Li; Yan, Bin
2017-07-01
Object detection is a computer vision problem which has attracted a large amount of attention. But candidate bounding boxes extracted from image features alone may end up as false detections due to the semantic gap between top-down and bottom-up information. In this paper, we propose a novel method for generating object bounding-box proposals using a combination of eye fixation points, saliency detection, and edges. The new method obtains a fixation-oriented Gaussian map, optimizes the map through single-layer cellular automata, and derives bounding boxes from the optimized map on three levels. Then we score the boxes by combining all the information above, and choose the box with the highest score as the final box. We evaluate our method by comparing with previous state-of-the-art approaches on the challenging POET dataset, the images of which are chosen from PASCAL VOC 2012. Our method outperforms them on small-scale objects while remaining comparable to them in general.
Aerial vehicles collision avoidance using monocular vision
NASA Astrophysics Data System (ADS)
Balashov, Oleg; Muraviev, Vadim; Strotov, Valery
2016-10-01
In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, region-of-interest (ROI) selection, contour segmentation, object matching, and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize an aerial vehicle's position, the system of equations relating object coordinates in space to the observed image is solved. The system solution gives the current position and speed of the detected object in space. Using this information, distance and time to collision can be estimated. Experimental research on real video sequences and modeled data is performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
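A common monocular quantity in this setting, not necessarily the exact formulation used by the authors (who also exploit onboard speed and orientation), is time-to-collision estimated from apparent size growth: if the target's image size is s = f·W/Z and the closing speed v is constant, then s/(ds/dt) = Z/v, the time to collision.

```python
def time_to_collision(size_prev, size_curr, dt):
    """TTC from apparent-size growth. With image size s = f*W/Z and constant
    closing speed v, ds/dt = f*W*v/Z^2, so s / (ds/dt) = Z / v = TTC."""
    ds_dt = (size_curr - size_prev) / dt
    return size_curr / ds_dt

# Target grows from 100 px to 110 px over 0.5 s -> roughly 5.5 s to collision.
print(time_to_collision(100.0, 110.0, 0.5))  # 5.5
```

Note that this estimate needs no camera calibration or target size, which is why it is popular for monocular collision warning.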
2D and 3D X-ray phase retrieval of multi-material objects using a single defocus distance.
Beltran, M A; Paganin, D M; Uesugi, K; Kitchen, M J
2010-03-29
A method of tomographic phase retrieval is developed for multi-material objects whose components each have a distinct complex refractive index. The phase-retrieval algorithm, based on the Transport-of-Intensity equation, utilizes propagation-based X-ray phase contrast images acquired at a single defocus distance for each tomographic projection. The method requires a priori knowledge of the complex refractive index of each material present in the sample, together with the total projected thickness of the object at each orientation. The requirement of only a single defocus distance per projection simplifies the experimental setup and imposes no additional dose compared to conventional tomography. The algorithm was implemented using phase contrast data acquired at the SPring-8 synchrotron facility in Japan. The three-dimensional (3D) complex refractive index distribution of a multi-material test object was quantitatively reconstructed using a single X-ray phase-contrast image per projection. The technique is robust in the presence of noise compared to conventional absorption-based tomography.
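The single-distance TIE filtering this work builds on can be sketched for the simpler single-material case (a Paganin-type Fourier filter); the multi-material extension described above is more involved. All parameter values below are illustrative, and the demo forward model is the same linearized filter run in reverse, so the round trip is exact:

```python
import numpy as np

def paganin_retrieve(I, I_in, delta, mu, R, pixel):
    """Single-distance, single-material TIE phase retrieval (Paganin-type):
    low-pass filter the normalized intensity, then invert Beer's law to get
    the projected thickness T(x, y)."""
    ky = np.fft.fftfreq(I.shape[0], d=pixel) * 2 * np.pi
    kx = np.fft.fftfreq(I.shape[1], d=pixel) * 2 * np.pi
    k2 = ky[:, None]**2 + kx[None, :]**2
    filt = 1.0 + (R * delta / mu) * k2
    M = np.fft.ifft2(np.fft.fft2(I / I_in) / filt).real
    return -np.log(np.clip(M, 1e-12, None)) / mu

# Synthetic demo: forward-apply the same linearized propagation model to a
# known thickness map, then retrieve it.
delta, mu, R, pixel = 1e-7, 50.0, 0.5, 1e-6     # illustrative values, SI units
T = np.zeros((64, 64)); T[20:40, 20:40] = 1e-5  # 10 um thick square object
contact = np.exp(-mu * T)                       # contact (absorption) image
ky = np.fft.fftfreq(64, d=pixel) * 2 * np.pi
k2 = ky[:, None]**2 + ky[None, :]**2
I = np.fft.ifft2(np.fft.fft2(contact) * (1.0 + (R * delta / mu) * k2)).real
T_rec = paganin_retrieve(I, 1.0, delta, mu, R, pixel)
print(np.allclose(T_rec, T, atol=1e-8))  # True
```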
The artificial object detection and current velocity measurement using SAR ocean surface images
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Strotov, Valery; Ershov, Maksim; Muraviev, Vadim; Feldman, Alexander; Smirnov, Sergey
2017-10-01
Because water covers wide areas of the Earth's surface, remote sensing is the most appropriate way of getting information about the ocean environment for vessel tracking, security purposes, ecological studies, and other applications. Processing of synthetic aperture radar (SAR) images is extensively used for control and monitoring of the ocean surface. Image data can be acquired from Earth observation satellites such as TerraSAR-X, ERS, and COSMO-SkyMed. Thus, SAR image processing can be used to solve many problems arising in this field of research. This paper discusses some of them, including ship detection, oil pollution control, and ocean current mapping. Due to the complexity of the problem, several specialized algorithms need to be developed. The oil spill detection algorithm consists of the following main steps: image preprocessing, detection of dark areas, parameter extraction, and classification. The ship detection algorithm consists of the following main steps: prescreening, land masking, image segmentation combined with parameter measurement, ship orientation estimation, and object discrimination. The proposed approach to ocean current mapping is based on the Doppler effect. The results of computer modeling on real SAR images are presented. Based on these results, it is concluded that the proposed approaches can be used in maritime applications.
New technologies lead to a new frontier: cognitive multiple data representation
NASA Astrophysics Data System (ADS)
Buffat, S.; Liege, F.; Plantier, J.; Roumes, C.
2005-05-01
The increasing number and complexity of operational sensors (radar, infrared, hyperspectral...) and the availability of huge amounts of data lead to more and more sophisticated information presentations. But one key element of the IMINT line cannot be improved beyond initial system specification: the operator. In order to overcome this issue, we have to better understand human visual object representation. Object recognition theories in human vision balance between matching 2D template representations carrying viewpoint-dependent information and a viewpoint-invariant system based on structural description. Spatial frequency content is relevant due to early vision filtering. Orientation in depth is an important variable to challenge object constancy. Three objects, each seen from three different points of view in a natural environment, provided the original images in this study. Test images were a combination of spatial-frequency-filtered original images and an additive contrast level of white noise. In the first experiment, the observer's task was a same-versus-different forced choice with spatial alternative. Test images had the same noise level in a presentation row. Discrimination threshold was determined by modifying the white noise contrast level by means of an adaptive method. In the second experiment, a repetition blindness paradigm was used to further investigate the viewpoint effect on object recognition. The results shed some light on how the human visual system processes objects displayed under different physical descriptions. This is an important achievement because targets that do not always match the physical properties of usual visual stimuli can increase operational workload.
Assigning Main Orientation to an EOH Descriptor on Multispectral Images.
Li, Yong; Shi, Xiang; Wei, Lijun; Zou, Junwei; Chen, Fang
2015-07-01
This paper proposes an approach to computing an EOH (edge-oriented histogram) descriptor with a main orientation. EOH has better matching ability than SIFT (scale-invariant feature transform) on multispectral images, but does not assign a main orientation to keypoints; instead, it effectively assigns the same main orientation (e.g., zero degrees) to every keypoint, which limits EOH to matching keypoints between images with translational misalignment only. Observing this limitation, we propose assigning to keypoints the main orientation computed with PIIFD (partial intensity invariant feature descriptor). In the proposed method, SIFT keypoints are detected in images as the extrema of a difference of Gaussians, and every keypoint is assigned the main orientation computed with PIIFD. Then, EOH is computed for every keypoint with respect to its main orientation. In addition, an implementation variant is proposed for fast computation of the EOH descriptor. Experimental results show that the proposed approach performs more robustly than the original EOH on image pairs that have a rotational misalignment.
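A main orientation can be assigned to a keypoint from the local gradient field. The sketch below uses a magnitude-weighted circular mean of gradient directions as a simple stand-in; the PIIFD computation referenced in the paper is more involved (it is designed to be invariant to intensity inversions across spectral bands):

```python
import numpy as np

def main_orientation(patch):
    """Dominant gradient direction of an image patch, in radians.
    Simple stand-in for a PIIFD-style main-orientation assignment:
    a magnitude-weighted circular mean of per-pixel gradient angles."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    c = np.sum(mag * np.exp(1j * ang))  # weighted resultant vector
    return np.angle(c)

# A horizontal step edge has a purely vertical gradient, so the dominant
# orientation is 90 degrees.
patch = np.zeros((32, 32)); patch[16:, :] = 1.0
theta = main_orientation(patch)
print(round(np.degrees(theta)))  # 90
```

Once theta is known, the EOH bins can be computed relative to it, making the descriptor rotation-invariant in the way the paper describes.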
ERIC Educational Resources Information Center
Ives, William; Rovet, Joanne
1979-01-01
Reports three experiments which investigate: whether familiar objects have standard graphic orientations (Experiment 1); the relationship between use of object orientations and more conventional methods in depicting familiar objects in motion (Experiment 2); and whether orientations are used differently in novel objects whose only defining feature…
Learning Photogrammetry with Interactive Software Tool PhoX
NASA Astrophysics Data System (ADS)
Luhmann, T.
2016-06-01
Photogrammetry is a complex topic in high-level university teaching, especially in the fields of geodesy, geoinformatics and metrology where high-quality results are demanded. In addition, more and more black-box solutions for 3D image processing and point cloud generation are available that generate nice results easily, e.g. by structure-from-motion approaches. Within this context, the classical approach of teaching photogrammetry (e.g. focusing on aerial stereophotogrammetry) has to be reformed in order to educate students and professionals in new topics and provide them with more information behind the scenes. For around 20 years, photogrammetry courses at the Jade University of Applied Sciences in Oldenburg, Germany, have included the use of digital photogrammetry software that provides individual exercises, deep analysis of calculation results and a wide range of visualization tools for almost all standard tasks in photogrammetry. In recent years the software package PhoX has been developed as part of a new didactic concept in photogrammetry and related subjects. It also serves as an analysis tool in recent research projects. PhoX consists of a project-oriented data structure for images, image data, measured points and features, and 3D objects. It provides almost all basic photogrammetric measurement tools, image processing, calculation methods, graphical analysis functions, simulations and much more. Students use the program to conduct predefined exercises in which they have the opportunity to analyse results at a high level of detail. This includes the analysis of statistical quality parameters but also the meaning of transformation parameters, rotation matrices, and calibration and orientation data. As one specific advantage, PhoX allows for the interactive modification of single parameters and the direct view of the resulting effect in image or object space.
Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects
NASA Astrophysics Data System (ADS)
Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.
2013-07-01
As a rule, image-based documentation of cultural heritage relies today on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available, open-source, user-friendly software tools for automatic camera calibration, often based on simple 2D chess-board patterns, are an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and verify object planarity. Results from practical experimentation indicate that this method may produce satisfactory results. The authors intend to incorporate the described approach into their freely available user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
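The inter-image homographies at the heart of this pipeline can be estimated from point correspondences with the direct linear transform (DLT). In the approach described above, this estimation sits inside a RANSAC loop over SIFT matches; the robust loop is omitted in this sketch:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H (dst ~ H @ src in homogeneous coordinates)
    from >= 4 point correspondences via the direct linear transform: stack two
    linear constraints per pair and take the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Sanity check: recover a known transform from four exact correspondences.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
H_true = np.array([[2, 0, 3], [0, 2, 5], [0, 0, 1]], float)
dst = (H_true @ np.c_[src, np.ones(4)].T).T
dst = dst[:, :2] / dst[:, 2:]
H = homography_dlt(src, dst)
print(np.allclose(H, H_true, atol=1e-8))  # True
```

In practice the point sets would first be normalized (Hartley's conditioning) and the DLT result refined inside RANSAC; both refinements are standard and omitted here.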
An insect-inspired model for visual binding II: functional analysis and visual attention.
Northcutt, Brandon D; Higgins, Charles M
2017-04-01
We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features, such as color, motion, and orientation, by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.
New method for identifying features of an image on a digital video display
NASA Astrophysics Data System (ADS)
Doyle, Michael D.
1991-04-01
The MetaMap process extends the concept of direct-manipulation human-computer interfaces to new limits. Its specific capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. The correlation is accomplished through reprogramming of both the color map and the image so that discrete image elements comprise unique sets of color indices. This process allows the correlation to be accomplished with very efficient data storage and program execution times. Image databases adapted to this process become object-oriented as a result. Very sophisticated interrelationships can be set up between images, text, and program control mechanisms using this process. An application of this interfacing process to the design of an interactive atlas of medical histology, as well as other possible applications, is described. The MetaMap process is protected by U. S. patent #4
Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2009-01-01
Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. The technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics used to characterize complex cells/objects derived from color image analysis. Some of the features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); CenterX (x-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); CenterY (y-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); Circumference (a measure of circumference that takes into account whether neighboring pixels are joined diagonally, a longer distance than horizontally or vertically joined pixels); Elongation (a measure of particle elongation given as a number between 0 and 1; if equal to 1, the particle bounding box is square, and as the value decreases from 1, the particle becomes more elongated); Ext_vector (extremal vector); Major Axis (the length of the major axis of the smallest ellipse encompassing an object); Minor Axis (the length of the minor axis of the smallest ellipse encompassing an object); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (points that make up a particle perimeter); Roundness ((4π × area) / perimeter², a measure of object roundness, or compactness, given as a value between 0 and 1; the greater the ratio, the rounder the object); Thin in Center (determines whether an object becomes thin in the center, i.e., figure-eight-shaped); and Theta (orientation of the major axis).
Smoothness and color metrics are also computed: for each component (red, green, blue), the minimum, maximum, average, and standard deviation within the particle are tracked. These metrics can be used for autonomous analysis of color images from a microscope, video camera, or digital still image. The algorithm can also automatically identify tumor morphology in stained images and has been used to detect stained-cell phenomena (see figure).
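Several of the listed metrics can be computed directly from a binary object mask. The sketch below is illustrative rather than the CCMIS code: elongation is taken as the bounding-box aspect ratio, and the circumference is approximated by counting boundary pixels, so the roundness value is only approximate.

```python
import numpy as np

def shape_metrics(mask):
    """Approximate Area, Elongation, and Roundness for a binary object mask,
    following the definitions quoted in the abstract. The perimeter estimate
    (object pixels with a 4-connected background neighbour) is a crude
    stand-in for the neighbour-aware circumference measure."""
    mask = np.asarray(mask, bool)
    ys, xs = np.nonzero(mask)
    area = len(xs)  # total number of object pixels

    # Elongation: bounding-box aspect ratio; 1.0 means a square box.
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    elongation = min(h, w) / max(h, w)

    # Boundary pixels: object pixels missing a 4-connected object neighbour.
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = area - int((interior & mask).sum())

    roundness = 4.0 * np.pi * area / perimeter ** 2
    return {"area": area, "elongation": elongation, "roundness": roundness}
```

For a filled disk the boundary-pixel count approaches the true circumference, so roundness approaches 1; elongated blobs score lower on elongation.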
An object-oriented description method of EPMM process
NASA Astrophysics Data System (ADS)
Jiang, Zuo; Yang, Fan
2017-06-01
In order to use mature object-oriented tools and languages in software process modelling, and to make the software process model conform more closely to industrial standards, it is necessary to study object-oriented modelling of software processes. Based on the formal process definition in EPMM, and considering that Petri nets are primarily a formal modelling tool, this paper combines Petri net modelling with object-oriented modelling ideas and provides an implementation method to convert Petri-net-based EPMM into object models based on object-oriented description.
Monte Carlo simulations in X-ray imaging
NASA Astrophysics Data System (ADS)
Giersch, Jürgen; Durst, Jürgen
2008-06-01
Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with examples produced with the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.
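The absorption part of such a simulation reduces to sampling exponential free paths. The toy sketch below (not ROSI) tracks only whether each photon's sampled path exceeds the slab thickness; scattering, fluorescence, and electron transport, which full codes model, are omitted.

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons=100_000, seed=42):
    """Toy Monte Carlo of X-ray transmission through a homogeneous slab.

    Each photon's free path is drawn from an exponential distribution with
    linear attenuation coefficient mu (1/cm); the photon escapes if the path
    exceeds the slab thickness (cm). The estimate converges to the
    Beer-Lambert value exp(-mu * thickness)."""
    rng = random.Random(seed)
    escaped = sum(
        -math.log(1.0 - rng.random()) / mu > thickness
        for _ in range(n_photons)
    )
    return escaped / n_photons
```

For mu = 1/cm and a 1 cm slab the estimate lands near exp(-1) ≈ 0.368, with statistical noise shrinking as 1/sqrt(n_photons).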
Sensitivity to spatial frequency content is not specific to face perception
Williams, N. Rankin; Willenbockel, Verena; Gauthier, Isabel
2010-01-01
Prior work using a matching task between images that were complementary in spatial frequency and orientation information suggested that the representation of faces, but not objects, retains low-level spatial frequency (SF) information (Biederman & Kalocsai, 1997). In two experiments, we reexamine the claim that faces are uniquely sensitive to changes in SF. In contrast to prior work, we used a design allowing the computation of sensitivity and response criterion for each category, and in one experiment, equalized low-level image properties across object categories. In both experiments, we find that observers are sensitive to SF changes for upright and inverted faces and nonface objects. Differential response biases across categories contributed to a larger sensitivity for faces, but even sensitivity showed a larger effect for faces, especially when faces were upright and in a front-facing view. However, when objects were inverted, or upright but shown in a three-quarter view, the matching of objects and faces was equally sensitive to SF changes. Accordingly, face perception does not appear to be uniquely affected by changes in SF content. PMID:19576237
NASA Astrophysics Data System (ADS)
Thompson, Errol; Kinshuk
2011-09-01
Object-oriented programming is seen as a difficult skill to master. There is considerable debate about the most appropriate way to introduce novice programmers to object-oriented concepts. Is it possible to uncover what the critical aspects or features are that enhance the learning of object-oriented programming? Practitioners have differing understandings of the nature of an object-oriented program. Uncovering these different ways of understanding leads to a greater understanding of the critical aspects and their relationship to the structure of the program produced. A phenomenographic study was conducted to uncover practitioner understandings of the nature of an object-oriented program. The study identified five levels of understanding and three dimensions of variation within these levels. These levels and dimensions of variation provide a framework for fostering conceptual change with respect to the nature of an object-oriented program.
Shribak, Michael; Larkin, Kieran G.; Biggs, David
2017-01-01
Abstract. We describe the principles of using orientation-independent differential interference contrast (OI-DIC) microscopy for mapping optical path length (OPL). Computation of the scalar two-dimensional OPL map is based on an experimentally acquired map of the OPL gradient vector field. Two methods of contrast enhancement for the OPL image, which reveal hardly visible structures and organelles, are presented. The results obtained can be used for reconstruction of a volume image. We have confirmed that a standard research-grade light microscope equipped with the OI-DIC and a 100×/1.3 NA objective lens, which was not specially selected for minimum wavefront and polarization aberrations, provides an OPL noise level of ∼0.5 nm and a lateral resolution of ∼300 nm at a wavelength of 546 nm. The new technology is the next step in the development of DIC microscopy. It can replace standard DIC prisms on existing commercial microscope systems without modification. This will allow biological researchers who already have microscopy setups to expand the performance of their systems. PMID:28060991
Recovering the 3d Pose and Shape of Vehicles from Stereo Images
NASA Astrophysics Data System (ADS)
Coenen, M.; Rottensteiner, F.; Heipke, C.
2018-05-01
The precise reconstruction and pose estimation of vehicles plays an important role, e.g., for autonomous driving. We tackle this problem on the basis of street-level stereo images obtained from a moving vehicle. Starting from initial vehicle detections, we use a deformable vehicle shape prior learned from CAD vehicle data to fully reconstruct the vehicles in 3D and to recover their 3D pose and shape. To fit a deformable vehicle model to each detection by inferring the optimal parameters for pose and shape, we define an energy function leveraging reconstructed 3D data, image information, the vehicle model and derived scene knowledge. To minimise the energy function, we apply a robust model fitting procedure based on iterative Monte Carlo model particle sampling. We evaluate our approach using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012). Our approach can deal with very coarse pose initialisations, and we achieve encouraging results with up to 82% correct pose estimations. Moreover, we are able to deliver very precise orientation estimation results with an average absolute error smaller than 4°.
Nie, Haitao; Long, Kehui; Ma, Jun; Yue, Dan; Liu, Jinguo
2015-01-01
Partial occlusions, large pose variations, and extreme ambient illumination conditions generally cause the performance degradation of object recognition systems. Therefore, this paper presents a novel approach for fast and robust object recognition in cluttered scenes based on an improved scale invariant feature transform (SIFT) algorithm and a fuzzy closed-loop control method. First, a fast SIFT algorithm is proposed by classifying SIFT features into several clusters based on several attributes computed from the sub-orientation histogram (SOH); in the feature matching phase, only features that share nearly the same corresponding attributes are compared. Second, a feature matching step is performed following a prioritized order based on the scale factor, which is calculated between the object image and the target object image, guaranteeing robust feature matching. Finally, a fuzzy closed-loop control strategy is applied to increase the accuracy of the object recognition and is essential for the autonomous object manipulation process. Compared to the original SIFT algorithm for object recognition, the result of the proposed method shows that the number of SIFT features extracted from an object increases significantly, and the computing speed of the object recognition process increases by more than 40%. The experimental results confirmed that the proposed method performs effectively and accurately in cluttered scenes. PMID:25714094
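The attribute-based pruning idea generalizes beyond SIFT: group target features by a cheap discrete attribute and compare descriptors only within the matching group. The sketch below is generic; the feature layout, attribute function, and threshold are hypothetical, not the paper's SOH attributes.

```python
from collections import defaultdict

def bucketed_match(query, target, attr, dist, max_dist):
    """Match query features to target features, pruning by attribute.

    `attr` maps a feature to a hashable bucket key (in the paper, integer
    attributes derived from the sub-orientation histogram); `dist` is the
    full descriptor distance, evaluated only within the query's bucket."""
    buckets = defaultdict(list)
    for t in target:
        buckets[attr(t)].append(t)
    matches = []
    for q in query:
        candidates = buckets.get(attr(q), [])
        if candidates:
            best = min(candidates, key=lambda t: dist(q, t))
            if dist(q, best) <= max_dist:
                matches.append((q, best))
    return matches
```

When the attribute is stable under the imaging transformations, each query touches only a small bucket instead of every target feature, which is the source of the reported speedup.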
Object-Oriented Programming in High Schools the Turing Way.
ERIC Educational Resources Information Center
Holt, Richard C.
This paper proposes an approach to introducing object-oriented concepts to high school computer science students using the Object-Oriented Turing (OOT) language. Students can learn about basic object-oriented (OO) principles such as classes and inheritance by using and expanding a collection of classes that draw pictures like circles and happy…
Erik Haunreiter; Zhanfeng Liu; Jeff Mai; Zachary Heath; Lisa Fischer
2008-01-01
Effective monitoring and identification of areas of hardwood mortality is a critical component in the management of sudden oak death (SOD). From 2001 to 2005, aerial surveys covering 13.5 million acres in California were conducted to map and monitor hardwood mortality for the early detection of Phytophthora ramorum, the pathogen responsible for SOD....
Extracting cardiac myofiber orientations from high frequency ultrasound images
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Jiang, Rong; Shen, Ming; Wagner, Mary B.; Kirshbom, Paul; Fei, Baowei
2013-03-01
Cardiac myofibers play an important role in the stress mechanics of the beating heart. The orientation of the myofibers determines the stress distribution and the deformation of the whole heart. It is important to image and quantitatively extract these orientations for understanding cardiac physiological and pathological mechanisms and for the diagnosis of chronic diseases. Ultrasound has been widely used in cardiac diagnosis because of its ability to perform dynamic and noninvasive imaging and because of its low cost. An extraction method is proposed to automatically detect cardiac myofiber orientations from high frequency ultrasound images. First, heart walls containing myofibers are imaged by B-mode high frequency (<20 MHz) ultrasound imaging. Second, myofiber orientations are extracted from the ultrasound images using the proposed method, which combines a nonlinear anisotropic diffusion filter, a Canny edge detector, the Hough transform, and K-means clustering. The method is validated with ultrasound data from phantoms and pig hearts.
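The Hough stage of such a pipeline can be illustrated with a toy accumulator (not the authors' implementation): for each candidate normal angle, project the edge points onto rho and count the fullest bin; collinear fiber-edge points pile into one bin at the correct angle.

```python
import numpy as np

def dominant_line_angle(edge_mask, n_angles=180, rho_bin=1.0):
    """Minimal Hough-transform vote over edge pixels.

    For each normal angle theta, compute rho = x*cos(theta) + y*sin(theta)
    for all edge points and take the fullest rho bin as that angle's vote
    count. Returns the dominant LINE direction in degrees, in [0, 180)."""
    ys, xs = np.nonzero(edge_mask)
    best_angle, best_votes = 0.0, -1
    for k in range(n_angles):
        theta = np.pi * k / n_angles
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        offset = int(np.floor(rho.min() / rho_bin))
        votes = np.bincount(
            np.round(rho / rho_bin).astype(int) - offset).max()
        if votes > best_votes:
            best_votes, best_angle = votes, theta
    # The line runs perpendicular to its normal direction theta.
    return (np.degrees(best_angle) + 90.0) % 180.0
```

Applying this per region of interest yields a coarse orientation field; K-means over such local angles then groups fiber bundles, as in the pipeline above.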
Automatic segmentation of bones from digital hand radiographs
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Taira, Ricky K.; Shim, Hyeonjoon; Keaton, Patricia
1995-05-01
The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The algorithm uses an object-oriented approach comprising several stages, beginning with the most general objects to be segmented, such as the outline of the hand against the background, and proceeding in a succession of stages to the most specific object, such as a specific phalangeal bone from a digit of the hand. Each stage carries custom operators unique to the needs of that specific stage which aid in more accurate results. The method is further aided by a knowledge base where all model contours and other information, such as age, race, and sex, are stored. Shape models, 1-D wrist profiles, as well as an interpretation tree are used to map model and data contour segments. Shape analysis is performed using an arc-length orientation transform. The method was tested on close to 340 phalangeal and epiphyseal objects to be segmented from 17 cases of pediatric hand images obtained from our clinical PACS. Patient ages range from 2 to 16 years. A pediatric radiologist preliminarily assessed the object contours, which were found to be accurate to within 95% for cases with non-fused bones and to within 85% for cases with fused bones. With accurate and robust results, the method can be applied toward areas such as the determination of bone age, the development of a normal hand atlas, and the characterization of many congenital and acquired growth diseases. Furthermore, this method's architecture can be applied to other image segmentation problems.
Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery
NASA Astrophysics Data System (ADS)
Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.
2016-12-01
Unmanned Aerial Systems (UAS) have been used widely as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS were equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve the clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate the spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset generated using a structure-from-motion (SfM) algorithm together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions, and can also differentiate object features on the surface.
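The first clustering level can be sketched as plain average-linkage agglomeration over super-pixel feature vectors. This is a minimal stand-in under stated assumptions: each super-pixel is summarized by its mean R, G, B plus mean DEM elevation, and the linkage criterion here (distance between cluster means) is illustrative.

```python
import numpy as np

def agglomerate(features, n_clusters):
    """Greedy agglomerative clustering of feature vectors.

    `features` is a list of per-super-pixel vectors (e.g. [R, G, B, DEM]).
    Repeatedly merges the pair of clusters with the closest mean vectors
    until `n_clusters` remain; returns lists of member indices."""
    feats = [np.asarray(f, float) for f in features]
    clusters = [[i] for i in range(len(feats))]
    means = [f.copy() for f in feats]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(means)):
            for b in range(a + 1, len(means)):
                d = np.linalg.norm(means[a] - means[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters.pop(b))
        means[a] = np.mean([feats[i] for i in clusters[a]], axis=0)
        means.pop(b)
    return clusters
```

The O(n²) pair scan is fine for a few thousand super-pixels; the resulting clusters become the "new pixels" fed to the second clustering level described above.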
NASA Astrophysics Data System (ADS)
Preuss, R.
2014-12-01
This article discusses the current capabilities of automated processing of image data using the example of Agisoft PhotoScan software. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or, increasingly, on UAVs are used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, provide image georeference in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as orthomosaics, DSMs or DTMs, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software, which divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential execution of the processing steps at predetermined control parameters. The paper presents practical results of fully automatic generation of orthomosaics both for images obtained by a metric Vexell camera and for a block of images acquired by a non-metric UAV system.
Detecting Slums from Quick Bird Data in Pune Using AN Object Oriented Approach
NASA Astrophysics Data System (ADS)
Shekhar, S.
2012-07-01
We have been witnessing a gradual and steady transformation from a predominantly rural society to an urban society in India, and by 2030 the country will have more people living in urban than in rural areas. Slums form an integral part of Indian urbanisation, as most Indian cities lack the basic needs of an acceptable life. Many efforts are being taken to improve their conditions. To carry out slum renewal programs and monitor their implementation, slum settlements should be recorded in an adequate spatial database. This can only be achieved through the analysis of remote sensing data with very high spatial resolution. Regarding the occurrence of settlement areas in remote sensing data, a pixel-based approach on a high-resolution image is unable to represent the heterogeneity of complex urban environments; hence there is a need for more sophisticated methods and data for slum analysis. An attempt has been made to detect and discriminate the slums of Pune city by describing the typical characteristics of these settlements, using eCognition software on QuickBird data on the basis of an object-oriented approach. Based on multiresolution segmentation, initial objects were created and then, depending on the texture, geometry, and contextual characteristics of the image objects, classified into slums and non-slums. The developed rule base allowed knowledge about the phenomena to be described clearly and easily using fuzzy membership functions, and the knowledge stored in the classification rule base led to the best classification, with more than 80% accuracy.
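Fuzzy membership functions of the kind used in such rule bases are typically piecewise linear. A minimal trapezoidal membership sketch (the breakpoint values in the test are illustrative, not the paper's thresholds):

```python
def trapezoid_membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises linearly from 0 at `a` to 1 at
    `b`, stays 1 until `c`, and falls linearly back to 0 at `d`. Rule-base
    classifiers combine such degrees (e.g. by min/max) across features like
    texture, geometry, and context before thresholding."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)
```

A segment whose roof-texture measure, object size, and neighbourhood density all have high membership in the "slum" sets would then be labelled a slum.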
Danescu, Radu; Ciurte, Anca; Turcu, Vlad
2014-02-11
The space around the Earth is filled with man-made objects, which orbit the planet at altitudes ranging from hundreds to tens of thousands of kilometers. Keeping an eye on all objects in Earth's orbit, useful and not useful, operational or not, is known as Space Surveillance. Due to cost considerations, the space surveillance solutions beyond the Low Earth Orbit region are mainly based on optical instruments. This paper presents a solution for real-time automatic detection and ranging of space objects of altitudes ranging from below the Medium Earth Orbit up to 40,000 km, based on two low cost observation systems built using commercial cameras and marginally professional telescopes, placed 37 km apart, operating as a large baseline stereovision system. The telescopes are pointed towards any visible region of the sky, and the system is able to automatically calibrate the orientation parameters using automatic matching of reference stars from an online catalog, with a very high tolerance for the initial guess of the sky region and camera orientation. The difference between the left and right image of a synchronized stereo pair is used for automatic detection of the satellite pixels, using an original difference computation algorithm that is capable of high sensitivity and a low false positive rate. The use of stereovision provides a strong means of removing false positives, and avoids the need for prior knowledge of the orbits observed, the system being able to detect at the same time all types of objects that fall within the measurement range and are visible on the image.
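The ranging principle is classical triangulation over the 37 km baseline. The sketch below is a 2D illustration only; the real system intersects full 3D lines of sight calibrated against star fields, and the angle convention here is an assumption.

```python
import math

def range_from_two_stations(baseline_m, alpha, beta):
    """Triangulate the distance from station A to an object observed
    simultaneously from two stations a known baseline apart.

    alpha and beta are the angles (radians) between the baseline and each
    station's line of sight, measured in the common observation plane.
    Uses the law of sines on the station-station-object triangle."""
    gamma = math.pi - alpha - beta       # angle subtended at the object
    # range_A / sin(beta) = baseline / sin(gamma)
    return baseline_m * math.sin(beta) / math.sin(gamma)
```

At 40,000 km the 37 km baseline subtends only about a milliradian, which is why the synchronized detection and the star-field calibration of camera orientation described above are critical to the range accuracy.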
Gravity in the Brain as a Reference for Space and Time Perception.
Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka
2015-01-01
Moving and interacting with the environment require a reference for orientation and a scale for calibration in space and time. There is a wide variety of environmental clues and calibrated frames at different locales, but the reference of gravity is ubiquitous on Earth. The pull of gravity on static objects provides a plummet which, together with the horizontal plane, defines a three-dimensional Cartesian frame for visual images. On the other hand, the gravitational acceleration of falling objects can provide a time-stamp on events, because the motion duration of an object accelerated by gravity over a given path is fixed. Indeed, since ancient times, man has been using plumb bobs for spatial surveying, and water clocks or pendulum clocks for time keeping. Here we review behavioral evidence in favor of the hypothesis that the brain is endowed with mechanisms that exploit the presence of gravity to estimate the spatial orientation and the passage of time. Several visual and non-visual (vestibular, haptic, visceral) cues are merged to estimate the orientation of the visual vertical. However, the relative weight of each cue is not fixed, but depends on the specific task. Next, we show that an internal model of the effects of gravity is combined with multisensory signals to time the interception of falling objects, to time the passage through spatial landmarks during virtual navigation, to assess the duration of a gravitational motion, and to judge the naturalness of periodic motion under gravity.
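The "time-stamp" property mentioned above follows from elementary kinematics: the duration of free fall from rest over a path h is fixed at t = sqrt(2h/g), independent of the object.

```python
import math

G = 9.81  # standard gravitational acceleration, m/s^2

def fall_duration(height_m, g=G):
    """Duration of free fall from rest over a given height: t = sqrt(2h/g).
    Because this mapping from path length to duration is fixed on Earth,
    gravitational motion can serve as a temporal reference for the brain,
    as argued in the review above."""
    return math.sqrt(2.0 * height_m / g)
```

A 4.905 m drop always takes one second; an internal model of g lets an observer predict interception times from visual path length alone.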
A GUI visualization system for airborne lidar image data to reconstruct 3D city model
NASA Astrophysics Data System (ADS)
Kawata, Yoshiyuki; Koizumi, Kohei
2015-10-01
A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output abilities, data conversion from ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images in various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our graphical user interface (GUI) visualization system for LiDAR data are demonstrated.
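The point-cloud-to-image conversion step can be sketched as a simple gridding operation. This is an illustrative simplification (in Python rather than IDL): each raster cell takes the maximum z of the points falling inside it, and the cell size is a free parameter.

```python
import numpy as np

def points_to_altitude_image(points, cell_size=1.0):
    """Convert (x, y, z) LiDAR points into a 2D altitude raster.

    Each pixel holds the maximum z of the points falling in its cell
    (NaN where no point landed), mirroring the toolbox's conversion of
    ASCII point clouds into altitude-valued LiDAR images."""
    pts = np.asarray(points, dtype=float)
    ix = ((pts[:, 0] - pts[:, 0].min()) / cell_size).astype(int)
    iy = ((pts[:, 1] - pts[:, 1].min()) / cell_size).astype(int)
    img = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for cx, cy, z in zip(ix, iy, pts[:, 2]):
        if np.isnan(img[cy, cx]) or z > img[cy, cx]:
            img[cy, cx] = z
    return img
```

Taking the per-cell maximum keeps building roofs intact, which is what a subsequent 3D city-model reconstruction step needs.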
Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.
Arganda-Carreras, Ignacio; Andrey, Philippe
2017-01-01
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial or free bioimage analysis software packages are now available, and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.
Neural network face recognition using wavelets
NASA Astrophysics Data System (ADS)
Karunaratne, Passant V.; Jouny, Ismail I.
1997-04-01
The recognition of human faces is a phenomenon that has been mastered by the human visual system and that has been researched extensively in the domain of computer neural networks and image processing. This research involves the study of neural networks and wavelet image processing techniques in the application of human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, be able to recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor they can be efficiently analyzed by means of a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraints of the system concern the characteristics of the images being processed. The system should be able to carry out effective recognition of human faces irrespective of the individual's facial expression, the presence of extraneous objects such as head-gear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.
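The wavelet pre-processing stage can be sketched with one decomposition level. The paper uses Daubechies wavelets; the sketch below uses Haar (db1, the simplest member of the Daubechies family) so the transform fits in a few lines, and assumes even image dimensions.

```python
import numpy as np

def haar2d_level1(img):
    """One level of a 2D Haar wavelet decomposition.

    Returns (approx, (horiz, vert, diag)): the low-pass approximation
    subband, kept for compression and feature extraction, plus the three
    detail subbands, whose small coefficients can be thresholded away
    for denoising."""
    img = np.asarray(img, float)
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise difference
    aa = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    av = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    da = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    dd = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return aa, (av, da, dd)
```

Feeding the quarter-size approximation subband (or a few recursion levels of it) to the back-propagation network shrinks the input dimension while preserving the face's coarse structure.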
Towards a general object-oriented software development methodology
NASA Technical Reports Server (NTRS)
Seidewitz, ED; Stark, Mike
1986-01-01
An object is an abstract software model of a problem domain entity. Objects are packages of both data and operations on that data (Goldberg 83, Booch 83). The Ada (tm) package construct is representative of this general notion of an object. Object-oriented design is the technique of using objects as the basic unit of modularity in systems design. The Software Engineering Laboratory at the Goddard Space Flight Center is currently involved in a pilot program to develop a flight dynamics simulator in Ada (approximately 40,000 statements) using object-oriented methods. Several authors have applied object-oriented concepts to Ada (e.g., Booch 83, Cherry 85). It was found that these methodologies are limited. As a result, a more general approach was synthesized which allows a designer to apply powerful object-oriented principles to a wide range of applications and at all stages of design. An overview is provided of this approach. Further, how object-oriented design fits into the overall software life-cycle is considered.
Digital holographic microscopy combined with optical tweezers
NASA Astrophysics Data System (ADS)
Cardenas, Nelson; Yu, Lingfeng; Mohanty, Samarendra K.
2011-02-01
While optical tweezers have been widely used for the manipulation and organization of microscopic objects in three dimensions, observing the manipulated objects along the axial direction has been quite challenging. In order to visualize the organization and orientation of objects along the axial direction, we report the development of digital holographic microscopy combined with optical tweezers (DHOT). Digital holography is achieved by use of a modified Mach-Zehnder interferometer with digital recording of the interference pattern of the reference and sample laser beams by a single CCD camera. In this method, quantitative phase information is retrieved dynamically with high temporal resolution, limited only by the frame rate of the CCD. Digital focusing, phase unwrapping, as well as online analysis and display of the quantitative phase images, were performed in software developed on the LabVIEW platform. Since the phase changes observed in DHOT are very sensitive to the optical thickness of the trapped volume, the number of particles trapped in the axial direction as well as the orientation of non-spherical objects could be estimated with high precision. Since diseases such as malaria and diabetes change the refractive index of red blood cells, this system can be employed to map such disease-specific changes in biological samples upon immobilization with optical tweezers.
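The axial particle-count estimate follows from the phase-thickness relation: each particle of thickness d and refractive index n_p in a medium of index n_m adds a phase of 2π(n_p − n_m)d/λ. The sketch below assumes identical particles; the polystyrene-like values in the test are illustrative, not measurements from the paper.

```python
import math

def stacked_particle_count(measured_phase, n_particle, n_medium,
                           particle_thickness_m, wavelength_m):
    """Estimate how many identical particles are stacked along the optical
    axis from the quantitative phase measured by digital holography.

    phase per particle = 2*pi*(n_p - n_m)*d / lambda; the count is the
    measured phase divided by this increment, rounded to the nearest
    integer."""
    phase_per_particle = (2.0 * math.pi * (n_particle - n_medium)
                          * particle_thickness_m / wavelength_m)
    return round(measured_phase / phase_per_particle)
```

The same sensitivity to optical thickness is what makes the method usable for mapping refractive-index changes in red blood cells.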
Portable Video/Digital Retinal Funduscope
NASA Technical Reports Server (NTRS)
Taylor, Gerald R.; Meehan, Richard; Hunter, Norwood; Caputo, Michael; Gibson, C. Robert
1991-01-01
Lightweight, inexpensive electronic and photographic instrument developed for detection, monitoring, and objective quantification of ocular/systemic disease or physiological alterations of retina, blood vessels, or other structures in anterior and posterior chambers of eye. Operated with little training. Functions with human or animal subject seated, recumbent, inverted, or in almost any other orientation; and in hospital, laboratory, field, or other environment. Produces video images viewed directly and/or digitized for simultaneous or subsequent analysis. Also equipped to produce photographs and/or fitted with adaptors to produce stereoscopic or magnified images of skin, nose, ear, throat, or mouth to detect lesions or diseases.
Infrared images of distant 3C radio galaxies
NASA Technical Reports Server (NTRS)
Eisenhardt, Peter; Chokshi, Arati
1990-01-01
J (1.2-micron) and K (2.2 micron) images have been obtained for eight 3CR radio galaxies with redshifts from 0.7 to 1.8. Most of the objects were known to have extended asymmetric optical continuum or line emission aligned with the radio lobe axis. In general, the IR morphologies of these galaxies are just as peculiar as their optical morphologies. For all the galaxies, when asymmetric structure is present in the optical, structure with the same orientation is seen in the IR and must be accounted for in any model to explain the alignment of optical and radio emission.
New data clustering for RBF classifier of agriculture products from x-ray images
NASA Astrophysics Data System (ADS)
Casasent, David P.; Chen, Xuewen
1999-08-01
Classification of real-time x-ray images of randomly oriented, touching pistachio nuts is discussed. The ultimate objective is the development of a subsystem for automated, non-invasive detection of defective product items on a conveyor belt. We discuss the use of clustering and how it is vital to achieving useful classification. New clustering methods using class identity and new cluster classes are advanced and shown to be of use for this application. Radial basis function neural net classifiers are emphasized. We expect our results to be of use for other classifiers and applications.
Mizrakhi, V M; Protsiuk, R G
2000-03-01
In profound impairment of vision, the perception of colour and of seen objects is absent, and the person is unable to orient himself in space. The uncovered sensory sensations of colour allowed their use in training the blind to recognize the colour of paper, fabric, etc. Further study of those who have become blind will, we believe, help in finding eligible people and relevant approaches to educating the blind, which will foster development of the trainee's ability to recognize images on the "inner visual screen".
NASA Astrophysics Data System (ADS)
Ghafaryasl, Babak; Baart, Robert; de Boer, Johannes F.; Vermeer, Koenraad A.; van Vliet, Lucas J.
2017-02-01
Optical coherence tomography (OCT) yields high-resolution, three-dimensional images of the retina. A better understanding of retinal nerve fiber bundle (RNFB) trajectories in combination with visual field data may be used for future diagnosis and monitoring of glaucoma. However, manual tracing of these bundles is a tedious task. In this work, we present an automatic technique to estimate the orientation of RNFBs from volumetric OCT scans. Our method consists of several steps, starting with automatic segmentation of the RNFL. Then, a stack of en face images around the posterior nerve fiber layer interface was extracted. The image showing the best visibility of RNFB trajectories was selected for further processing. After denoising the selected en face image, a semblance structure-oriented filter was applied to probe the strength of local linear structure in a discrete set of orientations, creating an orientation space. Gaussian filtering along the orientation axis of this space was used to find the dominant orientation. Next, a confidence map was created to supplement the estimated orientation. This confidence map was used as the pixel weight in normalized convolution to regularize the semblance filter response, after which a new orientation estimate could be obtained. Finally, after several iterations, an orientation field corresponding to the strongest local orientation was obtained. The RNFB orientations of six macular scans from three subjects were estimated. For all scans, visual inspection shows good agreement between the estimated orientation fields and the RNFB trajectories in the en face images. Additionally, a good correlation between the orientation fields of two scans of the same subject was observed. Our method was also applied to a larger field of view around the macula. Manual tracing of the RNFB trajectories shows good agreement with the streamlines automatically obtained by fiber tracking.
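The orientation-space idea above (probe a discrete set of orientations, then take the strongest response per pixel) can be sketched as follows. This is a simplified stand-in for the paper's semblance filter and normalized-convolution regularization: it uses plain directional averaging, and the angle count and line length are invented illustration parameters.

```python
import numpy as np

def orientation_space(img, n_angles=12, length=7):
    """Probe local linear structure in a discrete set of orientations.

    For each candidate angle, average the image along a short line
    segment; strong responses indicate fibre-like structure aligned
    with that angle (a crude stand-in for the semblance filter).
    Returns an array of shape (n_angles, H, W).
    """
    h, w = img.shape
    responses = np.zeros((n_angles, h, w))
    offsets = np.arange(length) - length // 2
    for a, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        dy, dx = np.sin(theta), np.cos(theta)
        acc = np.zeros((h, w))
        for t in offsets:
            # shift the image along the line direction (nearest pixel)
            sy, sx = int(round(t * dy)), int(round(t * dx))
            acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
        responses[a] = acc / length
    return responses

def dominant_orientation(responses):
    """Angle index of the strongest response at each pixel."""
    return np.argmax(responses, axis=0)
```

A horizontal bright line, for instance, yields the strongest response at the angle index corresponding to theta = 0.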
NASA Astrophysics Data System (ADS)
Rönnholm, P.; Haggrén, H.
2012-07-01
Integration of laser scanning data and photographs is an excellent combination in terms of both redundancy and complementarity. Applications of integration vary from sensor and data calibration to advanced classification and scene understanding. In this research, only airborne laser scanning and aerial images are considered. Currently, the initial registration is solved using direct orientation sensors (GPS and inertial measurement units). However, the accuracy is not usually sufficient for reliable integration of the data sets, and thus the initial registration needs to be improved. Registration of data from different sources requires searching for and measuring accurate tie features. Usually, points, lines or planes are preferred as tie features. Therefore, the majority of recent methods rely heavily on artificial objects, such as buildings, targets or road paintings. However, in many areas no such objects are available. In forestry areas, for example, it would be advantageous to be able to improve registration between laser data and images without making additional ground measurements. Therefore, there is a need to solve registration using only natural features, such as vegetation and ground surfaces. Using vegetation as tie features is challenging, because the shape and even the location of vegetation can change because of wind, for example. The aim of this article was to compare registration accuracies derived using either artificial or natural tie features. The test area included urban objects as well as trees and other vegetation. In this area, two registrations were performed: first using mainly built objects, and second using only vegetation and the ground surface. The registrations were solved by applying the interactive orientation method. As a result, using artificial tie features led to a successful registration along all directions of the coordinate system axes. In the case of using natural tie features, however, detecting the correct heights was difficult, which also caused some tilt errors. The planimetric registration was nevertheless accurate.
The influence of grasping habits and object orientation on motor planning in children and adults.
Jovanovic, Bianca; Schwarzer, Gudrun
2017-12-01
We investigated the influence of habitual grasp strategies and object orientation on motor planning in 3-year-olds, 4- to 5-year-old children, and adults. Participants were required to rotate different vertically oriented objects by 180°. Usually, adults perform this task by grasping objects with an awkward grip (thumb and index finger pointing downward) at the beginning of the movement, in order to finish it with a comfortable hand position. This pattern corresponds to the well-known end-state comfort effect (ESC) in grasp planning. The presented objects were associated with different habitual grasp orientations that either corresponded to the grasp direction required to reach end-state comfort (downward) or implied a contrary grasp orientation (upward). Additionally, they were presented either in their usual, canonical orientation (e.g., a shovel with the blade oriented downward versus a cup with its opening oriented upward) or upside down. As the dependent variable, we analyzed the number of grips conforming to the end-state comfort principle (ESC score) realized in each object type and orientation condition. The number of grips conforming to ESC strongly increased with age. In addition, the extent to which end-state comfort was considered was influenced by the actual orientation of the objects' functional parts. Thus, in all age groups the ESC score was highest when the functional parts of the objects were oriented downward (shovel presented canonically with blade pointing downward, cup presented upside down) and corresponded to the hand orientation needed to realize ESC. © 2017 Wiley Periodicals, Inc.
Acoustic positioning and orientation prediction
NASA Technical Reports Server (NTRS)
Barmatz, Martin B. (Inventor); Aveni, Glenn (Inventor); Putterman, Seth (Inventor); Rudnick, Joseph (Inventor)
1990-01-01
A method is described for use with an acoustic positioner, which enables a determination of the equilibrium position and orientation which an object assumes in a zero gravity environment, as well as restoring forces and torques of an object in an acoustic standing wave field. An acoustic standing wave field is established in the chamber, and the object is held at several different positions near the expected equilibrium position. While the object is held at each position, the center resonant frequency of the chamber is determined, by noting which frequency results in the greatest pressure of the acoustic field. The object position which results in the lowest center resonant frequency is the equilibrium position. The orientation of a nonspherical object is similarly determined, by holding the object in a plurality of different orientations at its equilibrium position, and noting the center resonant frequency for each orientation. The orientation which results in the lowest center resonant frequency is the equilibrium orientation. Where the acoustic frequency is constant, but the chamber length is variable, the equilibrium position or orientation is that which results in the greatest chamber length at the center resonant frequency.
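The selection rule described above (among trial object placements, the one giving the lowest center resonant frequency is the equilibrium) reduces to a simple argmin over candidate positions or orientations. A minimal sketch, with the physical frequency measurement mocked by a hypothetical callable:

```python
import numpy as np

def find_equilibrium(candidates, measure_center_frequency):
    """Equilibrium estimate per the patent's rule: among trial object
    placements (positions or orientations), the one giving the lowest
    chamber center resonant frequency is the equilibrium.

    `measure_center_frequency` stands in for the physical measurement
    (sweep the drive frequency, keep the one producing the greatest
    acoustic pressure).
    """
    freqs = [measure_center_frequency(c) for c in candidates]
    i = int(np.argmin(freqs))
    return candidates[i], freqs[i]
```

With a mock frequency response that dips at the true equilibrium, the search recovers that position.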
The effect of implied orientation derived from verbal context on picture recognition.
Stanfield, R A; Zwaan, R A
2001-03-01
Perceptual symbol systems assume an analogue relationship between a symbol and its referent, whereas amodal symbol systems assume an arbitrary relationship between a symbol and its referent. According to perceptual symbol theories, the complete representation of an object, called a simulation, should reflect physical characteristics of the object. Amodal theories, in contrast, do not make this prediction. We tested the hypothesis, derived from perceptual symbol theories, that people mentally represent the orientation of an object implied by a verbal description. Orientation (vertical-horizontal) was manipulated by having participants read a sentence that implicitly suggested a particular orientation for an object. Then recognition latencies to pictures of the object in each of the two orientations were measured. Pictures matching the orientation of the object implied by the sentence were responded to faster than pictures that did not match the orientation. This finding is interpreted as offering support for theories positing perceptual symbol systems.
Vegetation Monitoring of Mashhad Using AN Object-Oriented POST Classification Comparison Method
NASA Astrophysics Data System (ADS)
Khalili Moghadam, N.; Delavar, M. R.; Forati, A.
2017-09-01
By and large, today's mega cities are confronting considerable urban development, with many new buildings being constructed in the fringe areas of these cities. This remarkable urban development will probably end in vegetation reduction, even though each mega city requires adequate areas of vegetation, which are crucial and helpful for these cities from a wide variety of perspectives, such as air pollution reduction, soil erosion prevention, and ecosystem and environmental protection. One of the optimal methods for monitoring this vital component of each city is multi-temporal satellite image acquisition combined with change detection techniques. In this research, the vegetation and urban changes of Mashhad, Iran, were monitored using an object-oriented (marker-based watershed algorithm) post-classification comparison (PCC) method. Bi-temporal multi-spectral Landsat satellite images of the study area were used to detect the changes in urban and vegetation areas and to find a relation between these changes. The results of this research demonstrate that during 1987-2017, the Mashhad urban area increased by about 22525 hectares and the vegetation area decreased by approximately 4903 hectares. These statistics substantiate the close relationship between urban development and vegetation reduction. Moreover, overall accuracies of 85.5% and 91.2% were achieved for the first and the second image classification, respectively. In addition, the overall accuracy and kappa coefficient of the change detection were assessed as 84.1% and 70.3%, respectively.
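The post-classification comparison step itself is a pixelwise comparison of the two independently classified maps. A minimal sketch; the class codes and the 30 m Landsat pixel area (0.09 ha) are assumptions for illustration, not the paper's legend:

```python
import numpy as np

def pcc_change(labels_t1, labels_t2, pixel_area_ha=0.09):
    """Post-classification comparison (PCC): classify each date
    separately, then compare the label maps class by class.
    Returns per-class area change in hectares (t2 minus t1);
    0.09 ha is the area of one 30 m Landsat pixel.
    """
    changes = {}
    for cls in np.union1d(np.unique(labels_t1), np.unique(labels_t2)):
        a1 = np.count_nonzero(labels_t1 == cls) * pixel_area_ha
        a2 = np.count_nonzero(labels_t2 == cls) * pixel_area_ha
        changes[int(cls)] = a2 - a1
    return changes
```

Applied to the two classified Landsat dates, positive entries indicate growth (e.g. urban) and negative entries loss (e.g. vegetation).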
Improving automated 3D reconstruction methods via vision metrology
NASA Astrophysics Data System (ADS)
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
NASA Astrophysics Data System (ADS)
Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia
Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for the evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image-variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with the FiBun software to model not only an image-variant interior orientation but also deformations in the sensor domain of the cameras showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure, indicating at the same time the presence of image-invariant errors in the sensor domain. Overall, the calibration results showed that digital cameras can be applied to an accurate photogrammetric survey and that only a little effort is needed to greatly improve the accuracy potential of digital cameras.
ERIC Educational Resources Information Center
Shin, Shin-Shing
2015-01-01
Students in object-oriented analysis and design (OOAD) courses typically encounter difficulties transitioning from object-oriented analysis (OOA) to object-oriented logical design (OOLD). This study conducted an empirical experiment to examine these learning difficulties by evaluating differences between OOA-to-OOLD and OOLD-to-object-oriented-physical-design…
ERIC Educational Resources Information Center
Thompson, Errol; Kinshuk
2011-01-01
Object-oriented programming is seen as a difficult skill to master. There is considerable debate about the most appropriate way to introduce novice programmers to object-oriented concepts. Is it possible to uncover what the critical aspects or features are that enhance the learning of object-oriented programming? Practitioners have differing…
A Method of Face Detection with Bayesian Probability
NASA Astrophysics Data System (ADS)
Sarker, Goutam
2010-10-01
The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an `Appearance Based Method', which relies on learning facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to compute the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and thereby the numbers of false positive and false negative detections are substantially low.
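The decision rule described above, comparing class-conditional likelihoods weighted by priors, can be sketched as follows. Gaussian class-conditional densities are an assumption here for illustration; the paper learns its statistics from face and non-face image examples:

```python
import numpy as np

# Bayesian conditional classification rule: decide "face" when
#   P(x | face) P(face) > P(x | non-face) P(non-face),
# evaluated in log space for numerical stability.
class BayesFaceClassifier:
    def fit(self, X_face, X_nonface):
        # Per-class feature means and variances (diagonal Gaussian).
        self.mu = [X_face.mean(0), X_nonface.mean(0)]
        self.var = [X_face.var(0) + 1e-6, X_nonface.var(0) + 1e-6]
        n = len(X_face) + len(X_nonface)
        self.logprior = [np.log(len(X_face) / n), np.log(len(X_nonface) / n)]

    def _loglik(self, x, k):
        return -0.5 * np.sum(np.log(2 * np.pi * self.var[k])
                             + (x - self.mu[k]) ** 2 / self.var[k])

    def is_face(self, x):
        s_face = self._loglik(x, 0) + self.logprior[0]
        s_nonface = self._loglik(x, 1) + self.logprior[1]
        return s_face > s_nonface
```

Trained on well-separated example features, the rule assigns each new feature vector to the class with the larger posterior.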
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal orthogonal Oriented quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
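The transform step, projecting each non-overlapping seven-pixel neighborhood onto seven kernels, can be sketched as a matrix product. The orthonormal basis below is a random placeholder, not the actual HOP edge/bar/blob kernels, and the hexagonal indexing is abstracted away into an (N, 7) neighborhood array:

```python
import numpy as np

def make_basis(seed=0):
    """Placeholder orthonormal 7x7 basis (columns are 7-pixel
    kernels), obtained by QR decomposition of a random matrix.
    In HOP the seven kernels are 3 oriented edges, 3 oriented
    bars, and 1 non-oriented blob on the hexagonal lattice."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((7, 7)))
    return q

def hop_layer(neighborhoods, basis):
    """neighborhoods: (N, 7) array, one row per non-overlapping
    seven-pixel hex neighborhood. Returns (N, 7): one coefficient
    per kernel per neighborhood."""
    return neighborhoods @ basis
```

Because the basis is orthonormal, `coeffs @ basis.T` reconstructs the neighborhoods exactly, mirroring the invertibility of an orthogonal pyramid transform.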
In vivo and ex vivo imaging with ultrahigh resolution full-field OCT
NASA Astrophysics Data System (ADS)
Grieve, Kate; Moneron, Gael; Schwartz, Wilfrid; Boccara, Albert C.; Dubois, Arnaud
2005-08-01
Imaging of in vivo and ex vivo biological samples using full-field optical coherence tomography is demonstrated. Three variations on the original full-field optical coherence tomography instrument are presented, and evaluated in terms of performance. The instruments are based on the Linnik interferometer illuminated by a white light source. Images in the en face orientation are obtained in real-time without scanning by using a two-dimensional parallel detector array. An isotropic resolution capability better than 1 μm is achieved thanks to the use of a broad spectrum source and high numerical aperture microscope objectives. Detection sensitivity up to 90 dB is demonstrated. Image acquisition times as short as 10 μs per en face image are possible. A variety of in vivo and ex vivo imaging applications is explored, particularly in the fields of embryology, ophthalmology and botany.
Application of unscented Kalman filter for robust pose estimation in image-guided surgery
NASA Astrophysics Data System (ADS)
Vaccarella, Alberto; De Momi, Elena; Valenti, Marta; Ferrigno, Giancarlo; Enquobahrie, Andinet
2012-02-01
Image-guided surgery (IGS) allows clinicians to view current, intra-operative scenes superimposed on preoperative images (typically MRI or CT scans). IGS systems use localization systems to track and visualize surgical tools overlaid on top of preoperative images of the patient during surgery. The most commonly used localization systems in Operating Rooms (OR) are optical tracking systems (OTS) due to their ease of use and cost effectiveness. However, OTSs suffer from the major drawback of requiring line of sight. State space approaches based on different implementations of the Kalman filter have recently been investigated in order to compensate for brief line-of-sight occlusions. However, the proposed parameterizations of the rigid body orientation suffer from singularities at certain values of the rotation angles. The purpose of this work is to develop a quaternion-based Unscented Kalman Filter (UKF) for robust optical tracking of both position and orientation of surgical tools, in order to compensate for marker occlusion. This paper presents preliminary results towards a Kalman-based Sensor Management Engine (SME). The engine will filter and fuse multimodal tracking streams of data. This work was motivated by our experience working in robot-based applications for keyhole neurosurgery (ROBOCAST project). The algorithm was evaluated using real data from an NDI Polaris tracker. The results show that our estimation technique is able to compensate for marker occlusion with a maximum error of 2.5° for orientation and 2.36 mm for position. The proposed approach will be useful in over-crowded state-of-the-art ORs where achieving continuous visibility of all tracked objects is difficult.
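The singularity issue the authors avoid is easy to illustrate: a unit quaternion parameterizes orientation without the gimbal-lock angles that break Euler-angle parameterizations. Below is a minimal sketch of the quaternion propagation step such a UKF process model needs; it is an illustrative fragment, not the ROBOCAST implementation:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def predict_orientation(q, omega, dt):
    """Propagate unit quaternion q by angular velocity omega (rad/s)
    over dt seconds, then renormalize -- no singular angles anywhere."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    qn = quat_mult(q, dq)
    return qn / np.linalg.norm(qn)
```

A half-turn about the z axis, for example, maps the identity quaternion to [0, 0, 0, 1] with no special-case handling.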
Colour analysis and verification of CCTV images under different lighting conditions
NASA Astrophysics Data System (ADS)
Smith, R. A.; MacLennan-Brown, K.; Tighe, J. F.; Cohen, N.; Triantaphillidou, S.; MacDonald, L. W.
2008-01-01
Colour information is not faithfully maintained by a CCTV imaging chain. Since colour can play an important role in identifying objects it is beneficial to be able to account accurately for changes to colour introduced by components in the chain. With this information it will be possible for law enforcement agencies and others to work back along the imaging chain to extract accurate colour information from CCTV recordings. A typical CCTV system has an imaging chain that may consist of scene, camera, compression, recording media and display. The response of each of these stages to colour scene information was characterised by measuring its response to a known input. The main variables that affect colour within a scene are illumination and the colour, orientation and texture of objects. The effects of illumination on the appearance of colour of a variety of test targets were tested using laboratory-based lighting, street lighting, car headlights and artificial daylight. A range of typical cameras used in CCTV applications, common compression schemes and representative displays were also characterised.
Fusion of light-field and photogrammetric surface form data
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard K.
2017-08-01
Photogrammetry-based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information as well as the photogrammetric point cloud. Compared to a traditional camera that only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map can be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to improve the measurement uncertainty of a millimetre-scale 3D object compared to that from the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and from triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.
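One common fusion rule consistent with the stated goal (a single point cloud with reduced measurement uncertainty) is inverse-variance weighting of corresponding points. A minimal sketch; the per-system variances are assumptions for illustration, not the paper's fusion method:

```python
import numpy as np

def fuse_points(p_lf, var_lf, p_pg, var_pg):
    """Fuse a light-field depth point with a photogrammetric point by
    inverse-variance weighting, which minimizes the variance of the
    fused estimate for independent measurements.

    p_lf, p_pg : 3-vectors (corresponding points from each system)
    var_lf, var_pg : scalar measurement variances for each system
    """
    w_lf, w_pg = 1.0 / var_lf, 1.0 / var_pg
    fused = (w_lf * np.asarray(p_lf) + w_pg * np.asarray(p_pg)) / (w_lf + w_pg)
    fused_var = 1.0 / (w_lf + w_pg)  # always below both input variances
    return fused, fused_var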
Unified modeling language and design of a case-based retrieval system in medical imaging.
LeBozec, C.; Jaulent, M. C.; Zapletal, E.; Degoulet, P.
1998-01-01
One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM (Images and Diagnosis from Examples in Medicine), a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspects of this approach were selecting the relevant objects of the system according to user requirements and enabling visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required, but UML seems to be a promising formalism that improves communication between developers and users. PMID:9929346
Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms
Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon
2011-01-01
Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532
NASA Technical Reports Server (NTRS)
Hjellming, R. M.
1992-01-01
AIPS++ is an Astronomical Information Processing System being designed and implemented by an international consortium of NRAO and six other radio astronomy institutions in Australia, India, the Netherlands, the United Kingdom, Canada, and the USA. AIPS++ is intended to replace the functionality of AIPS, to be more easily programmable, and will be implemented in C++ using object-oriented techniques. Programmability in AIPS++ is planned at three levels. The first level will be that of a command-line interpreter with characteristics similar to IDL and PV-Wave, but with an extensive set of operations appropriate to telescope data handling, image formation, and image processing. The second level will be in C++ with extensive use of class libraries for both basic operations and advanced applications. The third level will allow input and output of data between external FORTRAN programs and AIPS++ telescope and image databases. In addition to summarizing the above programmability characteristics, this talk will give an overview of the classes currently being designed for telescope data calibration and editing, image formation, and the 'toolkit' of mathematical 'objects' that will perform most of the processing in AIPS++.
Design and test of an object-oriented GIS to map plant species in the Southern Rockies
NASA Technical Reports Server (NTRS)
Morain, Stanley A.; Neville, Paul R. H.; Budge, Thomas K.; Morrison, Susan C.; Helfrich, Donald A.; Fruit, Sarah
1993-01-01
Elevational and latitudinal shifts occur in the flora of the Rocky Mountains due to long term climate change. In order to specify which species are successfully migrating with these changes, and which are not, an object-oriented, image-based geographic information system (GIS) is being created to animate evolving ecological regimes of temperature and precipitation. Research at the Earth Data Analysis Center (EDAC) is developing a landscape model that includes the spatial, spectral and temporal domains. It is designed to visualize migratory changes in the Rocky Mountain flora, and to specify future community compositions. The object-oriented database will eventually tag each of the nearly 6000 species with a unique hue, intensity, and saturation value, so their movements can be individually traced. An associated GIS includes environmental parameters that control the distribution of each species in the landscape, and satellite imagery is used to help visualize the terrain. Polygons for the GIS are delineated as landform facets that are static in ecological time. The model manages these facets as a triangular irregular net (TIN), and their analysis assesses the gradual progression of species as they migrate through the TIN. Using an appropriate climate change model, the goal will be to stop the modeling process to assess both the rate and direction of species' change and to specify the changing community composition of each landscape facet.
NASA Astrophysics Data System (ADS)
House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor
2017-03-01
PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive. Compact 3D vision systems such as Intel RealSense cameras can capture 3D pose information at several orders of magnitude lower cost, size, and weight. We propose to use the Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and where limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce the risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparison to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using the Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and to display the CT image overlaid on the optical image. RESULTS: The accuracy evaluation yielded a median position error of 3.3 mm (95th percentile 6.7 mm) and a median orientation error of 1.6° (95th percentile 4.3°) in a 20 x 16 x 10 cm workspace, while constantly maintaining proper marker orientation. The model and surface aligned correctly, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
ERIC Educational Resources Information Center
Lutke, Nikolay; Lange-Kuttner, Christiane
2015-01-01
This study introduces the new Rotated Colour Cube Test (RCCT) as a measure of object identification and mental rotation using single 3D colour cube images in a matching-to-sample procedure. One hundred 7- to 11-year-old children were tested with aligned or rotated cube models, distracters and targets. While different orientations of distracters…
Visual Processing of Object Velocity and Acceleration
1991-12-13
…more recently, Dr. Grzywacz's applications of filtering models to the psychophysics of speed discrimination; 3) the McKee-Welch studies on the… population of spatio-temporally oriented filters to encode velocity. Dr. Grzywacz has attempted to reconcile his model with a variety of psychophysical… by many authors. In these models, the image is spatially and temporally filtered; detectors have different sizes and spatial positions.
Object-oriented knowledge representation for expert systems
NASA Technical Reports Server (NTRS)
Scott, Stephen L.
1991-01-01
Object-oriented techniques have generated considerable interest in the Artificial Intelligence (AI) community in recent years. This paper discusses an approach for representing expert system knowledge using classes, objects, and message passing. The implementation is in version 4.3 of NASA's C Language Integrated Production System (CLIPS), an expert system tool that does not provide direct support for object-oriented design. The method uses programmer-imposed conventions and keywords to structure facts, and rules to provide object-oriented capabilities.
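The convention-based approach described above (facts structured with keywords, rules supplying message-passing behaviour) can be illustrated outside CLIPS as a dispatch table keyed on (class, message). This Python sketch uses hypothetical names and is only an analogy for the paper's technique.

```python
# "Objects" are plain facts (dicts tagged with a class keyword); behaviour
# is attached by registering message handlers per class, rather than by
# native classes -- mirroring an OO convention layered on a non-OO tool.
handlers = {}

def defmessage(cls, msg):
    """Register a handler for (class, message), like a rule keyed on keywords."""
    def register(fn):
        handlers[(cls, msg)] = fn
        return fn
    return register

def send(obj, msg, *args):
    """Dispatch a message to the handler matching the object's class keyword."""
    return handlers[(obj["class"], msg)](obj, *args)

@defmessage("circle", "area")
def circle_area(obj):
    return 3.14159 * obj["radius"] ** 2

fact = {"class": "circle", "radius": 2.0}
area = send(fact, "area")
```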
SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Yang, D
2015-06-15
Purpose: In the course of radiation therapy, the complex information processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, allowing automatic checks of position and orientation for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days' clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. Of 200 kV portal images, 182 were detected correctly, a success rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It requires only the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be used to guide subsequent image processing procedures, e.g., verification of patient daily setup accuracy.
This work was partially supported by a research grant from Varian Medical Systems.
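The matching step described above (a preprocessed portal image compared against a fused DRR template) is commonly implemented with normalized cross-correlation. The brute-force sketch below is illustrative only, not the authors' code, and the test image is synthetic.

```python
import numpy as np

def normxcorr(image, template):
    """Slide a template over an image, scoring each position with
    normalized cross-correlation; return the peak position and score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tn
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

rng = np.random.default_rng(1)
big = rng.random((40, 40))          # stand-in for the fused template image
patch = big[10:18, 22:30]           # "portal image" cut from location (10, 22)
pos, score = normxcorr(big, patch)
```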
Neumann, M; Breton, E; Cuvillon, L; Pan, L; Lorenz, C H; de Mathelin, M
2012-01-01
In this paper, an original workflow is presented for MR image plane alignment based on tracking in real-time MR images. A test device consisting of two resonant micro-coils and a passive marker is proposed for detection using image-based algorithms. The micro-coils allow automated initialization of object detection in dedicated low-flip-angle projection images; the passive marker is then tracked in clinical real-time MR images, alternating between two oblique orthogonal image planes along the test device axis. If the passive marker is lost in the real-time images, the workflow is reinitialized. The proposed workflow was designed to minimize dedicated acquisition time to a single dedicated acquisition in the ideal case (no reinitialization required). First experiments have shown promising results for test-device tracking precision, with a mean position error of 0.79 mm and a mean orientation error of 0.24°.
NASA Technical Reports Server (NTRS)
Eiroa, C.; Hodapp, K.-W.
1989-01-01
High-resolution near-infrared images and ice-band spectra of the protoplanetary nebula M1-92 (Minkowski's Footprint) are presented. The direct images of the object display a typical bipolar morphology with the star located in the center of the nebula illuminating two lobes. The overall dimensions are the same in the J, H, and K infrared bands, and they are similar to those in the optical range. The near-infrared color images clearly reveal a dust torus around the central star. The orientation of the object in the plane of the sky allows the simultaneous view of the illuminating star, the nebular lobes, and the dust torus in a highly favorable perspective, only rarely found in other bipolar nebulae. The ice-band spectra make it possible to locate the H2O-ice grains within the dust torus; in addition, the narrow ice feature indicates that the ices are primarily pure crystalline water.
Two-Dimensional Nonlinear Finite Element Analysis of CMC Microstructures
NASA Technical Reports Server (NTRS)
Mital, Subodh K.; Goldberg, Robert K.; Bonacuse, Peter J.
2011-01-01
Detailed two-dimensional finite element analyses of the cross sections of a model CVI (chemical vapor infiltrated) SiC/SiC (silicon carbide fiber in a silicon carbide matrix) ceramic matrix composite are performed. High-resolution images of the cross section of this composite material are generated using serial sectioning of the test specimens. These images are then used to develop very detailed finite element models of the cross sections using the public domain software OOF2 (Object Oriented Analysis of Material Microstructures). Examination of these images shows that the microstructures have significant variability and irregularity. The overall objective of this work is to determine how these variabilities manifest themselves in the variability of effective properties as well as in the stress distribution, damage initiation, and damage progression. Results indicate that even though the macroscopic stress-strain behavior of the various sections analyzed is very similar, each section has a very distinct damage pattern when subjected to in-plane tensile loads, and this damage pattern seems to follow the unique architectural and microstructural details of the analyzed sections.
The development of a learning management system for dental radiology education: A technical report.
Chang, Hee-Jin; Symkhampha, Khanthaly; Huh, Kyung-Hoe; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul
2017-03-01
This study proposes the development of a learning management system for dental radiology education using the Modular Object-Oriented Dynamic Learning Environment (Moodle). Moodle is a well-known and verified open-source software learning management system (OSS-LMS). The Moodle software was installed on a server computer and customized for dental radiology education. The system was implemented for teaching undergraduate students to diagnose dental caries in panoramic images. Questions were chosen that could assess students' diagnostic ability. Students were given several questions corresponding to each of 100 panoramic images. The installation and customization of Moodle was feasible, cost-effective, and time-saving. By having students answer questions repeatedly, it was possible to train them to examine panoramic images sequentially and thoroughly. Based on its educational efficiency and efficacy, the adoption of an OSS-LMS in dental schools may be highly recommended. The system could be extended to continuing education for dentists. Further studies on the objective evaluation of knowledge acquisition and retention are needed.
Change detection from remotely sensed images: From pixel-based to object-based approaches
NASA Astrophysics Data System (ADS)
Hussain, Masroor; Chen, Dongmei; Cheng, Angela; Wei, Hui; Stanley, David
2013-06-01
The appetite for up-to-date information about the earth's surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and have been widely used for change detection studies. A large number of change detection methodologies and techniques utilizing remotely sensed data have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditional pixel-based and (mostly) statistics-oriented change detection techniques, which focus mainly on spectral values and largely ignore the spatial context. This is followed by a review of object-based change detection techniques. Finally, there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of the different techniques are compared. The exponential increase in image data volume and in the number of sensors, and the associated challenges for the development of change detection techniques, are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection.
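A minimal pixel-based baseline of the kind discussed above is image differencing between two co-registered dates followed by a threshold, using spectral values only and no spatial context. The arrays and threshold below are purely illustrative.

```python
import numpy as np

# Two tiny co-registered "images" of the same scene at dates t1 and t2.
t1 = np.array([[10, 10, 200],
               [10, 10, 200],
               [10, 10, 200]], dtype=float)
t2 = np.array([[10, 10, 200],
               [10, 90, 200],
               [10, 90, 200]], dtype=float)

diff = np.abs(t2 - t1)
change_mask = diff > 30          # per-pixel decision; neighbours are ignored
n_changed = int(change_mask.sum())
```

An object-based method would instead segment the images first and compare object-level features (area, shape, mean spectrum), which is exactly the shift the review describes.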
Interactive tele-radiological segmentation systems for treatment and diagnosis.
Zimeras, S; Gortzis, L G
2012-01-01
Telehealth is the exchange of health information and the provision of health care services through electronic information and communications technology, where participants are separated by geographic, time, social and cultural barriers. The shift of telemedicine from desktop platforms to wireless and mobile technologies is likely to have a significant impact on healthcare in the future. It is therefore crucial to develop a general information-exchange e-medical system that enables its users to perform online and offline medical consultations and diagnosis. During medical diagnosis, image analysis techniques combined with doctors' opinions can be useful for final medical decisions. Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. In medical images, segmentation has traditionally been done by human experts. Even with the aid of image processing software (computer-assisted segmentation tools), manual segmentation of 2D and 3D CT images is tedious, time-consuming, and thus impractical, especially in cases where a large number of objects must be specified. Substantial computational and storage requirements become especially acute when object orientation and scale have to be considered. Therefore, automated or semi-automated segmentation techniques are essential if these software applications are ever to gain widespread clinical use. The main purpose of this work is to analyze segmentation techniques for the definition of anatomical structures in telemedical systems.
Object-oriented numerical computing C++
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1994-01-01
An object-oriented language is one that allows users to create a set of related types and then intermix and manipulate values of these related types. This paper discusses object-oriented numerical computing using C++.
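The definition above, related types whose values intermix, can be shown in a few lines. The sketch is in Python rather than C++, and the types are invented for illustration.

```python
class Scalar:
    """A wrapped scalar -- one member of a family of related numeric types."""
    def __init__(self, v):
        self.v = v

class Vector:
    """A small vector type whose values intermix with Scalar via operators."""
    def __init__(self, xs):
        self.xs = list(xs)

    def __mul__(self, other):
        if isinstance(other, Scalar):            # scale by the related type
            return Vector(x * other.v for x in self.xs)
        return Vector(a * b for a, b in zip(self.xs, other.xs))

v = Vector([1.0, 2.0, 3.0]) * Scalar(2.0)        # related types intermix freely
```

In C++ the same idea is expressed with operator overloading on user-defined classes.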
NASA Astrophysics Data System (ADS)
Chu, Chien-Hsun; Chiang, Kai-Wei
2016-06-01
The early development of mobile mapping systems (MMS) was restricted to applications that permitted the determination of the elements of exterior orientation from existing ground control. Mobile mapping refers to a means of collecting geospatial data using mapping sensors that are mounted on a mobile platform. Research work concerning mobile mapping dates back to the late 1980s. This process was mainly driven by the need for highway infrastructure mapping and transportation corridor inventories. In the early nineties, advances in satellite and inertial technology made it possible to think about mobile mapping in a different way. Instead of using ground control points as references for orienting the images in space, the trajectory and attitude of the imager platform could now be determined directly. Cameras, along with navigation and positioning sensors, are integrated and mounted on a land vehicle for mapping purposes. Objects of interest can be directly measured and mapped from images that have been georeferenced using navigation and positioning sensors. Direct georeferencing (DG) is the determination of time-variable position and orientation parameters for a mobile digital imager. The most common technologies used for this purpose today are satellite positioning using the Global Navigation Satellite System (GNSS) and inertial navigation using an Inertial Measuring Unit (IMU). Although either technology used alone could in principle determine both position and orientation, they are usually integrated in such a way that the IMU is the main orientation sensor, while the GNSS receiver is the main position sensor. However, in GNSS-denied environments such as urban canyons, foliage, tunnels and indoors, GNSS signals are obstructed because of the limited number of visible satellites, causing GNSS gaps, or are interfered with by reflected signals that cause abnormal measurement residuals, thus deteriorating the positioning accuracy.
This study aims at developing a novel method that uses ground control points to maintain the positioning accuracy of the MMS in GNSS-denied environments. Finally, this study analyses the performance of the proposed method using about 20 checkpoints through the DG process.
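The core DG computation, a GNSS position plus an IMU attitude rotation applied to a body-frame offset, can be sketched as follows. This is a simplified, hypothetical example using a single yaw rotation; a real system uses the full roll-pitch-yaw rotation, lever arms, and boresight calibration.

```python
import numpy as np

def direct_georeference(platform_pos, yaw_deg, body_vector):
    """Rotate a body-frame offset into the mapping frame and add the
    platform position: GNSS supplies position, the IMU supplies
    orientation. Simplified here to a yaw rotation about the vertical."""
    a = np.radians(yaw_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return platform_pos + R @ body_vector

# A point 5 m ahead of the vehicle, with the vehicle heading rotated 90 deg.
ground = direct_georeference(np.array([100.0, 50.0, 10.0]),
                             90.0,
                             np.array([5.0, 0.0, 0.0]))
```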
ERIC Educational Resources Information Center
Lobo, Michele A.; Galloway, James C.
2008-01-01
The effects of 3 weeks of social (control), postural, or object-oriented experiences on 9- to 21-week-old infants' (N = 42) reaching, exploration, and means-end behaviors were assessed. Coders recorded object contacts, mouthing, fingering, attention, and affect from video. Postural and object-oriented experiences advanced reaching, haptic…
Object-oriented programming with mixins in Ada
NASA Technical Reports Server (NTRS)
Seidewitz, ED
1992-01-01
Recently, I wrote a paper discussing the lack of 'true' object-oriented programming language features in Ada 83, why one might desire them in Ada, and how they might be added in Ada 9X. The approach I took in this paper was to build the new object-oriented features of Ada 9X as much as possible on the basic constructs and philosophy of Ada 83. The object-oriented features proposed for Ada 9X, while different in detail, are based on the same kind of approach. Further consideration of this approach led me on a long reflection on the nature of object-oriented programming and its application to Ada. The results of this reflection, presented in this paper, show how a fairly natural object-oriented style can indeed be developed even in Ada 83. The exercise of developing this style is useful for at least three reasons: (1) it provides a useful style for programming object-oriented applications in Ada 83 until new features become available with Ada 9X; (2) it demystifies many of the mechanisms that seem to be 'magic' in most object-oriented programming languages by making them explicit; and (3) it points out areas that are and are not in need of change in Ada 83 to make object-oriented programming more natural in Ada 9X. In the next four sections I will address in turn the issues of object-oriented classes, mixins, self-reference and supertyping. The presentation is through a sequence of examples. This results in some overlap with that paper, but all the examples in the present paper are written entirely in Ada 83. I will return to considerations for Ada 9X in the last section of the paper.
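For comparison, the mixin style the paper constructs by convention in Ada 83 is built in to languages such as Python; a minimal sketch with invented classes:

```python
class Persistable:
    """Mixin: contributes one capability (saving) to whatever inherits it."""
    def save(self):
        return f"saved {self.name}"

class Named:
    """Base class holding the state the mixin relies on."""
    def __init__(self, name):
        self.name = name

class Document(Persistable, Named):   # mixin combined with a base class
    pass

doc = Document("report")
result = doc.save()                   # capability supplied by the mixin
```

The point of the paper is that the same composition can be made explicit in Ada 83 using generics and derived types, rather than relying on a built-in mechanism.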
Towards an Object-Oriented Model for the Design and Development of Learning Objects
ERIC Educational Resources Information Center
Chrysostomou, Chrysostomos; Papadopoulos, George
2008-01-01
This work introduces the concept of an Object-Oriented Learning Object (OOLO) that is developed in a manner similar to the one that software objects are developed through Object-Oriented Software Engineering (OO SWE) techniques. In order to make the application of the OOLO feasible and efficient, an OOLO model needs to be developed based on…
Ardeshiri, Ramtin; Mulcahy, Ben; Zhen, Mei; Rezai, Pouya
2016-01-01
C. elegans is a well-known model organism in biology and neuroscience, with a simple cellular (959 cells) and nervous (302 neurons) system and a genome with relatively high (40%) homology to humans. Lateral and longitudinal manipulation of C. elegans into a favorable orientation is important in many applications such as neural and cellular imaging, laser ablation, microinjection, and electrophysiology. In this paper, we describe a micro-electro-fluidic device for on-demand manipulation of C. elegans and demonstrate its application in imaging of organs and neurons that cannot be visualized efficiently under natural orientation. To achieve this, we used the electrotaxis technique to longitudinally orient the worm in a microchannel and then insert it into an orientation and imaging channel in which we integrated a rotatable glass capillary for orienting the worm in any desired direction. The success rates of longitudinal and lateral orientation were 76% and 100%, respectively. We have demonstrated the application of our device in optical and fluorescent imaging of the vulva, uterine-vulval cell (uv1), vulB1/2 (adult vulval toroid cells), and ventral nerve cord of wild-type and mutant worms. In comparison to existing methods, the developed technique is capable of orienting the worm at any desired angle and maintaining the orientation while providing access to the worm for potential post-manipulation assays. This versatile tool can potentially be used in various applications such as neurobehavioral imaging, neuronal ablation, microinjection, and electrophysiology. PMID:27990213
Sensitivity images for multi-view ultrasonic array inspection
NASA Astrophysics Data System (ADS)
Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony J.; Zhang, Jie; Wilcox, Paul D.; Kashubin, Artem; Cawley, Peter
2018-04-01
The multi-view total focusing method (TFM) is an imaging technique for ultrasonic full matrix array data that typically exploits ray paths with zero, one or two internal reflections in the inspected object and for all combinations of longitudinal and transverse modes. The fusion of this vast quantity of views is expected to increase the reliability of ultrasonic inspection; however, it is not trivial to determine which views and which areas are the most suited for the detection of a given type and orientation of defect. This work introduces sensitivity images that give the expected response of a defect in any part of the inspected object and for any view. These images are based on a ray-based analytical forward model. They can be used to determine which views and which areas lead to the highest probability of detection of the defect. They can also be used for quantitatively analyzing the effects of the parameters of the inspection (probe angle and position, for example) on the overall probability of detection. Finally, they can be used to rescale TFM images so that the different views have comparable amplitudes. This methodology is applied to experimental data and discussed.
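The underlying TFM delay-and-sum that the sensitivity images build on can be sketched for the simplest case, the direct (zero-skip) view with a single wave speed. The array geometry, wave speed, and full matrix data below are synthetic and purely illustrative.

```python
import numpy as np

def tfm_image(fmc, tx_pos, rx_pos, grid, c, fs):
    """Delay-and-sum total focusing method for one (direct) view: for each
    pixel, sum the A-scan samples at the transmit plus receive times of
    flight over all transmitter/receiver pairs."""
    image = np.zeros(len(grid))
    for k, p in enumerate(grid):
        for i, t in enumerate(tx_pos):
            for j, r in enumerate(rx_pos):
                tof = (np.linalg.norm(p - t) + np.linalg.norm(p - r)) / c
                s = int(round(tof * fs))
                if s < fmc.shape[2]:
                    image[k] += fmc[i, j, s]
    return np.abs(image)

# Synthetic full matrix capture: a 3-element array and one point scatterer.
c, fs = 6000.0, 1e6                              # wave speed (m/s), sample rate (Hz)
elems = [np.array([x, 0.0]) for x in (0.0, 0.01, 0.02)]
scatterer = np.array([0.01, 0.03])
fmc = np.zeros((3, 3, 200))
for i, t in enumerate(elems):
    for j, r in enumerate(elems):
        tof = (np.linalg.norm(scatterer - t) + np.linalg.norm(scatterer - r)) / c
        fmc[i, j, int(round(tof * fs))] = 1.0    # unit echo at the pair's delay

grid = [scatterer, np.array([0.005, 0.02])]      # on-defect and off-defect pixels
img = tfm_image(fmc, elems, elems, grid, c, fs)
```

A sensitivity image, in the paper's sense, predicts how large the focused response at each pixel would be for a given defect type and orientation, rather than computing it from measured data.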
Effect of silhouetting and inversion on view invariance in the monkey inferotemporal cortex
2017-01-01
We effortlessly recognize objects across changes in viewpoint, but we know relatively little about the features that underlie viewpoint invariance in the brain. Here, we set out to characterize how viewpoint invariance in monkey inferior temporal (IT) neurons is influenced by two image manipulations—silhouetting and inversion. Reducing an object into its silhouette removes internal detail, so this would reveal how much viewpoint invariance depends on the external contours. Inverting an object retains but rearranges features, so this would reveal how much viewpoint invariance depends on the arrangement and orientation of features. Our main findings are 1) view invariance is weakened by silhouetting but not by inversion; 2) view invariance was stronger in neurons that generalized across silhouetting and inversion; 3) neuronal responses to natural objects matched early with that of silhouettes and only later to that of inverted objects, indicative of coarse-to-fine processing; and 4) the impact of silhouetting and inversion depended on object structure. Taken together, our results elucidate the underlying features and dynamics of view-invariant object representations in the brain. NEW & NOTEWORTHY We easily recognize objects across changes in viewpoint, but the underlying features are unknown. Here, we show that view invariance in the monkey inferotemporal cortex is driven mainly by external object contours and is not specialized for object orientation. We also find that the responses to natural objects match with that of their silhouettes early in the response, and with inverted versions later in the response—indicative of a coarse-to-fine processing sequence in the brain. PMID:28381484
Effect of silhouetting and inversion on view invariance in the monkey inferotemporal cortex.
Ratan Murty, N Apurva; Arun, S P
2017-07-01
NASA Astrophysics Data System (ADS)
Liu, Qingsheng; Liang, Li; Liu, Gaohuan; Huang, Chong
2017-09-01
Vegetation often occurs as patches in arid and semi-arid regions throughout the world. Vegetation patches can be effectively monitored with remote sensing images. However, not all satellite platforms are suitable for studying quasi-circular vegetation patches. This study compares fine (GF-1) and coarse (CBERS-04) resolution platforms, specifically focusing on the quasi-circular vegetation patches in the Yellow River Delta (YRD), China. Vegetation patch features (area, shape) were extracted from GF-1 and CBERS-04 imagery using an unsupervised classifier (K-Means) and an object-oriented approach (example-based feature extraction with an SVM classifier) in order to analyze vegetation patterns. These features were then compared using vector overlay and differencing, and the root mean squared error (RMSE) was used to determine whether the mapped vegetation patches were significantly different. Regardless of whether K-Means or example-based feature extraction with SVM classification was used, the area of quasi-circular vegetation patches from visual interpretation of a QuickBird image (ground truth data) was greater than that from both GF-1 and CBERS-04, and the number of patches detected from GF-1 data was greater than that from the CBERS-04 image. Without expert experience and professional training in the object-oriented approach, K-Means was better than example-based feature extraction with SVM for detecting the patches. The results indicate that CBERS-04 can be used to detect patches with areas of more than 300 m2, but GF-1 data are a sufficient source for patch detection in the YRD. In the future, however, finer resolution platforms such as WorldView are needed to gain more detailed insight into patch structure, components, and formation mechanisms.
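The unsupervised K-Means step can be illustrated on single-band pixel values; this toy implementation and its data are illustrative only, not the software or imagery used in the study.

```python
import numpy as np

def kmeans_1d(pixels, k, iters=20, seed=0):
    """Plain k-means on single-band pixel values: assign each pixel to the
    nearest cluster centre, recompute centres, repeat."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels, centers

# Two well-separated brightness populations (e.g. vegetation vs. background).
pixels = np.array([0.10, 0.12, 0.11, 0.80, 0.82, 0.79])
labels, centers = kmeans_1d(pixels, 2)
```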
The generation and use of numerical shape models for irregular Solar System objects
NASA Technical Reports Server (NTRS)
Simonelli, Damon P.; Thomas, Peter C.; Carcich, Brian T.; Veverka, Joseph
1993-01-01
We describe a procedure that allows the efficient generation of numerical shape models for irregular Solar System objects, where a numerical model is simply a table of evenly spaced body-centered latitudes and longitudes and their associated radii. This modeling technique uses a combination of data from limbs, terminators, and control points, and produces shape models that have some important advantages over analytical shape models. Accurate numerical shape models make it feasible to study irregular objects with a wide range of standard scientific analysis techniques. These applications include the determination of moments of inertia and surface gravity, the mapping of surface locations and structural orientations, photometric measurement and analysis, the reprojection and mosaicking of digital images, and the generation of albedo maps. The capabilities of our modeling procedure are illustrated through the development of an accurate numerical shape model for Phobos and the production of a global, high-resolution, high-pass-filtered digital image mosaic of this Martian moon. Other irregular objects that have been modeled, or are being modeled, include the asteroid Gaspra and the satellites Deimos, Amalthea, Epimetheus, Janus, Hyperion, and Proteus.
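A numerical shape model in the sense above is simply a (latitude, longitude, radius) table; converting it to Cartesian surface points is the starting point for the applications listed (gravity, mapping, reprojection). The three-entry table below is fictitious.

```python
import numpy as np

def shape_model_to_xyz(lat_deg, lon_deg, radii):
    """Convert a numerical shape model -- body-centred latitudes and
    longitudes with their radii -- to Cartesian surface points."""
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg)
    x = radii * np.cos(lat) * np.cos(lon)
    y = radii * np.cos(lat) * np.sin(lon)
    z = radii * np.sin(lat)
    return np.column_stack([x, y, z])

# Tiny illustrative table: three grid entries of a made-up irregular body.
pts = shape_model_to_xyz(np.array([0.0, 0.0, 90.0]),
                         np.array([0.0, 90.0, 0.0]),
                         np.array([11.0, 12.0, 9.0]))
```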
Vertical or horizontal orientation of foot radiographs does not affect image interpretation
Ferran, Nicholas Antonio; Ball, Luke; Maffulli, Nicola
2012-01-01
This study determined whether the orientation of dorsoplantar and oblique foot radiographs has an effect on radiograph interpretation. A test set of 50 consecutive foot radiographs was selected (25 with fractures and 25 normal) and duplicated in the horizontal orientation. The images were randomly arranged, numbered 1 through 100, and analysed by six image interpreters. Vertical and horizontal area under the ROC curve, accuracy, sensitivity and specificity were calculated for each image interpreter. There was no significant difference in the area under the ROC curve, accuracy, sensitivity or specificity of image interpretation between images viewed in the vertical or horizontal orientation. While conventions for the display of radiographs may help trainees develop an efficient visual search strategy, and allow for standardisation of published radiographic images, variation from the convention in clinical practice does not appear to affect the sensitivity or specificity of image interpretation. PMID:23738310
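The per-reader metrics above (accuracy, sensitivity, specificity) follow directly from the confusion counts over a fracture / no-fracture test set; the reader responses below are invented for illustration.

```python
# 50-image test set: 25 fractures (1) followed by 25 normals (0).
truth = [1] * 25 + [0] * 25
# Invented reader: 23 true positives, 2 misses, 24 true negatives, 1 false alarm.
calls = [1] * 23 + [0] * 2 + [0] * 24 + [1]

tp = sum(1 for t, c in zip(truth, calls) if t == 1 and c == 1)
tn = sum(1 for t, c in zip(truth, calls) if t == 0 and c == 0)
fp = sum(1 for t, c in zip(truth, calls) if t == 0 and c == 1)
fn = sum(1 for t, c in zip(truth, calls) if t == 1 and c == 0)

sensitivity = tp / (tp + fn)       # fraction of fractures detected
specificity = tn / (tn + fp)       # fraction of normals correctly cleared
accuracy = (tp + tn) / len(truth)
```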
Jayaprakash, Paul T
2017-09-01
The often-cited reliability test of the video superimposition method integrated scaling of face images relative to skull images; the tragus-auditory meatus relationship, in addition to the exocanthion-Whitnall's tubercle relationship, when orienting the skull image; and wipe-mode imaging, in addition to mix-mode imaging, when obtaining the skull-face image overlay and evaluating the goodness of match. However, a report that found higher false-positive matches with the computer-assisted superimposition method departed from these foundational concepts: it relied on images of unspecified sizes smaller than 'life-size', on frontal-plane landmarks in the skull and face images alone for orienting the skull image, and on mix images alone for evaluating the goodness of match. Recently, arguing that the use of 'life-size' images is 'archaic', the authors who tested the reliability of the computer-assisted superimposition method have denied any such method transition. This article describes how the use of images of unspecified sizes smaller than 'life-size' eliminates the only possibility of quantifying parameters during superimposition, which alone enables dynamic skull orientation when overlaying a skull image with a face image in an anatomically acceptable orientation. The dynamic skull orientation process mandatorily requires aligning the tragus in the 2D face image with the auditory meatus in the 3D skull image so as to orient the skull image anatomically relative to the posture in the face image, a step not mentioned by the authors describing the computer-assisted superimposition method. Furthermore, mere reliance on mix-type images during image overlay eliminates the possibility of assessing the relationship between the leading edges of the skull and face image outlines, as well as specific area matches among the corresponding craniofacial organs, during superimposition.
Indicating the possibility of increased false-positive matches as a consequence of these method transitions, the article stresses the need to test the reliability of the superimposition method using concepts that are considered safe. Copyright © 2017 Elsevier B.V. All rights reserved.
The information extraction of Gannan citrus orchard based on the GF-1 remote sensing image
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, Y. L.
2017-02-01
The production of Gannan oranges is the largest in China and occupies an important place in the world. Extracting citrus orchards quickly and effectively is of great significance for fruit pathogen defense, fruit production and industrial planning. The traditional pixel-based spectral extraction method for citrus orchards has lower classification accuracy and can hardly avoid the "salt-and-pepper" phenomenon. Under the influence of noise, the phenomenon of different objects sharing the same spectrum is severe. Taking the citrus planting area of Xunwu County, Ganzhou, as the research object, and addressing the lower accuracy of the traditional pixel-based classification method, a decision tree classification method based on an object-oriented rule set is proposed. First, multi-scale segmentation is performed on the GF-1 remote sensing image data of the study area. Subsequently, sample objects are selected for statistical analysis of spectral and geometric features. Finally, combining the concept of decision tree classification, empirical values of single-band thresholds, NDVI, band combinations and object geometry features are applied hierarchically to extract information for the research area, implementing multi-scale segmentation with hierarchical decision tree classification. The classification results are verified with a confusion matrix, and the overall Kappa index is 87.91%.
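The first level of such a rule set, an NDVI threshold separating vegetated objects before the later geometric rules, can be sketched as follows. The band values and the 0.3 threshold are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# Red and near-infrared reflectance for a tiny 2x2 scene (invented values).
red = np.array([[0.10, 0.40],
                [0.12, 0.38]])
nir = np.array([[0.60, 0.45],
                [0.55, 0.42]])

# NDVI = (NIR - Red) / (NIR + Red); high for healthy vegetation.
ndvi = (nir - red) / (nir + red)
vegetated = ndvi > 0.3            # first rule level: keep vegetated objects
```

Subsequent levels of the decision tree would then split the vegetated objects by band combinations and object geometry (area, shape) to isolate citrus orchards.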
Integration of object-oriented knowledge representation with the CLIPS rule based system
NASA Technical Reports Server (NTRS)
Logie, David S.; Kamil, Hasan
1990-01-01
The paper describes a portion of the work aimed at developing an integrated, knowledge based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ which is used to build and modify an object-oriented knowledge base. The ORL was designed to be easily integrated with other representation schemes that could effectively reason with the object base. Specifically, the integration of the ORL with the rule based system CLIPS (C Language Integrated Production System), developed at the NASA Johnson Space Center, will be discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are comprised of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects. Data is inherited through an object network via the relationship links. Together, the two schemes complement each other in that the object-oriented approach efficiently handles problem data while the rule based knowledge is used to simulate the reasoning process. Alone, the object based knowledge is little more than an object-oriented data storage scheme; however, the CLIPS inference engine adds the mechanism to directly and automatically reason with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base with complete access to all the functionality of the ORL from rules.
Strength and coherence of binocular rivalry depends on shared stimulus complexity.
Alais, David; Melcher, David
2007-01-01
Presenting incompatible images to the eyes results in alternations of conscious perception, a phenomenon known as binocular rivalry. We examined rivalry using either simple stimuli (oriented gratings) or coherent visual objects (faces, houses, etc.). Two rivalry characteristics were measured: depth of rivalry suppression and coherence of alternations. Rivalry between coherent visual objects exhibits deep suppression and coherent rivalry, whereas rivalry between gratings exhibits shallow suppression and piecemeal rivalry. Interestingly, rivalry between a simple and a complex stimulus displays the same characteristics (shallow and piecemeal) as rivalry between two simple stimuli. Thus, complex stimuli fail to rival globally unless the fellow stimulus is also global. We also conducted a face adaptation experiment. Adaptation to rivaling faces improved subsequent face discrimination (as expected), but adaptation to a rivaling face/grating pair did not. To explain this, we suggest rivalry must be an early and local process (at least initially), instigated by the failure of binocular fusion, which can then become globally organized by feedback from higher-level areas when both rivalry stimuli are global, so that rivalry tends to oscillate coherently. These globally assembled images then flow through object processing areas, with the dominant image gaining in relative strength in a form of 'biased competition', therefore accounting for the deeper suppression of global images. In contrast, when only one eye receives a global image, local piecemeal suppression from the fellow eye overrides the organizing effects of global feedback to prevent coherent image formation. This indicates the primacy of local over global processes in rivalry.
Object-oriented productivity metrics
NASA Technical Reports Server (NTRS)
Connell, John L.; Eller, Nancy
1992-01-01
Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.
Mapping cardiac fiber orientations from high-resolution DTI to high-frequency 3D ultrasound
NASA Astrophysics Data System (ADS)
Qin, Xulei; Wang, Silun; Shen, Ming; Zhang, Xiaodong; Wagner, Mary B.; Fei, Baowei
2014-03-01
The orientation of cardiac fibers affects the anatomical, mechanical, and electrophysiological properties of the heart. Although echocardiography is the most common imaging modality in clinical cardiac examination, it can only provide cardiac geometry or motion information, without cardiac fiber orientations. If the patient's cardiac fiber orientations can be mapped to his/her echocardiography images in clinical examinations, it may provide quantitative measures for diagnosis, personalized modeling, and image-guided cardiac therapies. Therefore, this project addresses the feasibility of mapping personalized cardiac fiber orientations to three-dimensional (3D) ultrasound image volumes. First, the geometry of the heart extracted from the MRI is translated to 3D ultrasound by rigid and deformable registration. Deformation fields between both geometries from MRI and ultrasound are obtained after registration. Three different deformable registration methods were utilized for the MRI-ultrasound registration. Finally, the cardiac fiber orientations imaged by DTI are mapped to ultrasound volumes based on the extracted deformation fields. Moreover, this study also demonstrated the ability to simulate electrical activation during the cardiac resynchronization therapy (CRT) process. The proposed method has been validated in two rat hearts and three canine hearts. After MRI/ultrasound image registration, the Dice similarity scores were more than 90% and the corresponding target errors were less than 0.25 mm. This proposed approach can provide cardiac fiber orientations to ultrasound images and can have a variety of potential applications in cardiac imaging.
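The Dice similarity score used above to validate the registration can be computed as in this minimal sketch; the two binary masks below are synthetic stand-ins for the segmented cardiac geometries.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two overlapping synthetic 2D masks (6x6 squares shifted by one pixel)
m1 = np.zeros((10, 10), bool); m1[2:8, 2:8] = True
m2 = np.zeros((10, 10), bool); m2[3:9, 3:9] = True
d = dice(m1, m2)
```

A score above 0.9, as reported in the abstract, indicates near-complete overlap between the MRI-derived and ultrasound-derived geometries.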
Axelrod, Daniel
2012-08-01
Microscopic fluorescent samples of interest to cell and molecular biology are commonly embedded in an aqueous medium near a solid surface that is coated with a thin film such as a lipid multilayer, collagen, acrylamide, or a cell wall. Both excitation and emission of fluorescent single molecules near film-coated surfaces are strongly affected by the proximity of the coated surface, the film thickness, its refractive index and the fluorophore's orientation. For total internal reflection excitation, multiple reflections in the film can lead to resonance peaks in the evanescent intensity versus incidence angle curve. For emission, multiple reflections arising from the fluorophore's near field emission can create a distinct intensity pattern in both the back focal plane and the image plane of a high aperture objective. This theoretical analysis discusses how these features can be used to report film thickness and refractive index, and fluorophore axial position and orientation. © 2012 The Author Journal of Microscopy © 2012 Royal Microscopical Society.
Dictionary-based fiber orientation estimation with improved spatial consistency.
Ye, Chuyang; Prince, Jerry L
2018-02-01
Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method.
Results demonstrate that FORNI+ produces FOs with better quality compared with competing methods. Copyright © 2017 Elsevier B.V. All rights reserved.
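The structure of an objective with the three terms named above (data fidelity, pairwise FO dissimilarity, and a weighted ℓ1 term) can be sketched as follows. This is a toy evaluation only: the dictionary, signals, neighbor pairs, and the weights `lam` and `beta` are synthetic assumptions, not FORNI+'s actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 8))          # dictionary: 8 atoms x 30 gradient dirs
f = np.abs(rng.standard_normal((5, 8)))   # mixture fractions for 5 voxels
y = f @ D.T + 0.01 * rng.standard_normal((5, 30))   # noisy observed signals
fo = rng.standard_normal((5, 3))
fo /= np.linalg.norm(fo, axis=1, keepdims=True)     # unit fiber orientations
pairs = [(0, 1), (1, 2), (2, 3), (3, 4)]            # neighboring voxel pairs

def objective(y, D, f, fo, pairs, lam=0.1, beta=0.05):
    # Data fidelity between observed and dictionary-represented signals
    fidelity = np.sum((y - f @ D.T) ** 2)
    # Pairwise FO dissimilarity: 1 - |cos angle|, respecting antipodal symmetry
    smooth = sum(1.0 - abs(fo[i] @ fo[j]) for i, j in pairs)
    # Weighted l1 term on the mixture fractions
    sparsity = beta * np.abs(f).sum()
    return fidelity + lam * smooth + sparsity

val = objective(y, D, f, fo, pairs)
```

In the actual method, this kind of objective would be minimized by alternating between updating the FOs and the mixture fractions, as the abstract describes.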
Software components for medical image visualization and surgical planning
NASA Astrophysics Data System (ADS)
Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.
2001-05-01
Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy to understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL based Visualization Toolkit as a base, we have developed a set of components that implement the above mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool.
The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
Ureter smooth muscle cell orientation in rat is predominantly longitudinal.
Spronck, Bart; Merken, Jort J; Reesink, Koen D; Kroon, Wilco; Delhaas, Tammo
2014-01-01
In ureter peristalsis, the orientation of the contracting smooth muscle cells is essential, yet current descriptions of orientation and composition of the smooth muscle layer in human as well as in rat ureter are inconsistent. The present study aims to improve quantification of smooth muscle orientation in rat ureters as a basis for mechanistic understanding of peristalsis. A crucial step in our approach is to use two-photon laser scanning microscopy and image analysis providing objective, quantitative data on smooth muscle cell orientation in intact ureters, avoiding the usual sectioning artifacts. In 36 rat ureter segments, originating from a proximal, middle or distal site and from a left or right ureter, we found close to the adventitia a well-defined longitudinal smooth muscle orientation. Towards the lamina propria, the orientation gradually became slightly more disperse, yet the main orientation remained longitudinal. We conclude that smooth muscle cell orientation in rat ureter is predominantly longitudinal, though the orientation gradually becomes more disperse towards the proprial side. These findings do not support identification of separate layers. The observed longitudinal orientation suggests that smooth muscle contraction would rather cause local shortening of the ureter, than cause luminal constriction. However, the net-like connective tissue of the ureter wall may translate local longitudinal shortening into co-local luminal constriction, facilitating peristalsis. Our quantitative, minimally invasive approach is a crucial step towards more mechanistic insight into ureter peristalsis, and may also be used to study smooth muscle cell orientation in other tube-like structures like gut and blood vessels.
NASA Astrophysics Data System (ADS)
Liu, Chunhui; Zhang, Duona; Zhao, Xintao
2018-03-01
Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposes a multitask saliency detection (MSD) model for SAR images. We extract four features of the SAR image (intensity, orientation, uniqueness, and global contrast) as the input of the MSD model. The saliency map is generated by multitask sparsity pursuit, which integrates the multiple features collaboratively. Detection of features at different scales is also taken into consideration. Subjective and objective evaluation of the MSD model verifies its effectiveness. Based on the saliency maps obtained by the MSD model, we apply the saliency map of the SAR image to SAR and color optical image fusion. The experimental results on real data show that the saliency map obtained by the MSD model helps to improve the fusion effect, and the salient areas in the SAR image can be highlighted in the fusion results.
Research of BRDF effects on remote sensing imagery
NASA Astrophysics Data System (ADS)
Nina, Peng; Kun, Wang; Tao, Li; Yang, Pan
2011-08-01
The gray-level distribution and contrast of optical satellite remote sensing imagery acquired over the same kind of ground surface can differ considerably; they depend not only on the satellite's observation geometry and the sun's incidence orientation but also on the structural and optical properties of the surface. Therefore, the objectives of this research are to analyze the different BRDF characteristics of soil, vegetation, water, and urban surfaces, and their BRDF effects on the quality of satellite images, through the 6S radiative transfer model. Furthermore, the causes of CCD blooming and spilling due to ground reflectance are discussed using QUICKBIRD image data and the corresponding ground image data. A general conclusion on BRDF effects on remote sensing imagery is proposed.
Object-oriented requirements analysis: A quick tour
NASA Technical Reports Server (NTRS)
Berard, Edward V.
1990-01-01
Of all the approaches to software development, an object-oriented approach appears to be both the most beneficial and the most popular. The description of the object-oriented approach is presented in the form of the view graphs.
High Performance Object-Oriented Scientific Programming in Fortran 90
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.
1997-01-01
We illustrate how Fortran 90 supports object-oriented concepts by example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modelling capabilities used for scientific programming are comparably powerful.
Completely optical orientation determination for an unstabilized aerial three-line camera
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2010-10-01
Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision requires considerable effort unless extensive camera stabilization is used. But stabilization also entails high costs, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then be reliably determined using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, the measurements of a high-end navigation system and ground control points are used.
Danescu, Radu; Ciurte, Anca; Turcu, Vlad
2014-01-01
The space around the Earth is filled with man-made objects, which orbit the planet at altitudes ranging from hundreds to tens of thousands of kilometers. Keeping an eye on all objects in Earth's orbit, useful and not useful, operational or not, is known as Space Surveillance. Due to cost considerations, the space surveillance solutions beyond the Low Earth Orbit region are mainly based on optical instruments. This paper presents a solution for real-time automatic detection and ranging of space objects of altitudes ranging from below the Medium Earth Orbit up to 40,000 km, based on two low cost observation systems built using commercial cameras and marginally professional telescopes, placed 37 km apart, operating as a large baseline stereovision system. The telescopes are pointed towards any visible region of the sky, and the system is able to automatically calibrate the orientation parameters using automatic matching of reference stars from an online catalog, with a very high tolerance for the initial guess of the sky region and camera orientation. The difference between the left and right image of a synchronized stereo pair is used for automatic detection of the satellite pixels, using an original difference computation algorithm that is capable of high sensitivity and a low false positive rate. The use of stereovision provides a strong means of removing false positives, and avoids the need for prior knowledge of the orbits observed, the system being able to detect at the same time all types of objects that fall within the measurement range and are visible on the image. PMID:24521941
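The core of the difference-based detection described above is that distant stars project to (nearly) identical pixels in both synchronized frames, while a nearer object is shifted by stereo parallax, so thresholding the inter-image difference isolates candidate object pixels. A minimal sketch, with synthetic frames, a synthetic 6-pixel disparity, and a hypothetical threshold (the paper's actual difference algorithm is more elaborate):

```python
import numpy as np

left = np.zeros((64, 64))
right = np.zeros((64, 64))

# Distant stars: same pixel coordinates in both synchronized frames
for r, c in [(10, 12), (30, 40), (50, 20)]:
    left[r, c] = right[r, c] = 200.0

# A satellite appears shifted between frames due to the large stereo baseline
left[25, 25] = 180.0
right[25, 31] = 180.0   # 6-pixel disparity (synthetic)

# Star pixels cancel in the difference; only the parallax-shifted object survives
diff = np.abs(left - right)
candidates = np.argwhere(diff > 50.0)   # hypothetical detection threshold
```

In the real system the surviving pixel pairs would then be matched between the two views and triangulated to estimate the object's range.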
Mapping ecological states in a complex environment
NASA Astrophysics Data System (ADS)
Steele, C. M.; Bestelmeyer, B.; Burkett, L. M.; Ayers, E.; Romig, K.; Slaughter, A.
2013-12-01
The vegetation of northern Chihuahuan Desert rangelands is sparse, heterogeneous and, for most of the year, consists of a large proportion of non-photosynthetic material. The soils in this area are spectrally bright and variable in their reflectance properties. Both factors provide challenges to the application of remote sensing for estimating canopy variables (e.g., leaf area index, biomass, percentage canopy cover, primary production). Additionally, with reference to current paradigms of rangeland health assessment, remotely-sensed estimates of canopy variables have limited practical use to the rangeland manager if they are not placed in the context of ecological site and ecological state. To address these challenges, we created a multifactor classification system based on the USDA-NRCS ecological site schema and associated state-and-transition models to map ecological states on desert rangelands in southern New Mexico. Applying this system using per-pixel image processing techniques and multispectral, remotely sensed imagery raised other challenges. Per-pixel image classification relies upon the spectral information in each pixel alone; there is no reference to the spatial context of the pixel and its relationship with its neighbors. Ecological state classes may have direct relevance to managers, but the non-unique spectral properties of different ecological state classes in our study area mean that per-pixel classification of multispectral data performs poorly in discriminating between different ecological states. We found that image interpreters who are familiar with the landscape and its associated ecological site descriptions perform better than per-pixel classification techniques in assigning ecological states. However, two important issues affect manual classification methods: subjectivity of interpretation and reproducibility of results. An alternative to per-pixel classification and manual interpretation is object-based image analysis.
Object-based image analysis provides a platform for classification that more closely resembles human recognition of objects within a remotely sensed image. The analysis presented here compares multiple thematic maps created for test locations on the USDA-ARS Jornada Experimental Range ranch. Three study sites in different pastures, each 300 ha in size, were selected for comparison on the basis of their ecological site type ('Clayey', 'Sandy' and a combination of both) and the degree of complexity of vegetation cover. Thematic maps were produced for each study site using (i) manual interpretation of digital aerial photography (by five independent interpreters); (ii) object-oriented, decision-tree classification of fine and moderate spatial resolution imagery (Quickbird; Landsat Thematic Mapper) and (iii) ground survey. To identify areas of uncertainty, we compared agreement in location, areal extent and class assignation between 5 independently produced, manually-digitized ecological state maps and with the map created from ground survey. Location, areal extent and class assignation of the map produced by object-oriented classification was also assessed with reference to the ground survey map.
NASA Astrophysics Data System (ADS)
Al-Durgham, Kaleel; Lichti, Derek D.; Kuntze, Gregor; Ronsky, Janet
2017-06-01
High-speed biplanar videoradiography, clinically referred to as dual fluoroscopy (DF), imaging systems are being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers and two high-speed video cameras. The combination of these elements provides time-series image pairs of the articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). Assessment of the accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, however with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on precise photogrammetric tools. We address the fundamental photogrammetric principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters. The bundle adjustment calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection with independently surveyed coordinates. The final DF accuracy measure is reported as the distance between the 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate the accuracy over the full DF image format and a wide range of object rotation. An experiment reconstructing a rotating planar object yielded an average positional error of 0.44 +/- 0.2 mm in the derived 3D coordinates (minimum 0.05, maximum 1.2 mm).
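The point-based registration and distance-based error measure described above can be sketched as follows, assuming a standard least-squares rigid (Kabsch/Procrustes) alignment between intersected and surveyed coordinates; the point sets below are synthetic, not the authors' data.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)       # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # enforce a proper rotation
    return R, cd - cs @ R.T

rng = np.random.default_rng(1)
surveyed = rng.uniform(0, 100, (20, 3))         # "independently surveyed" points
theta = 0.3                                      # synthetic pose offset
Rt = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
measured = surveyed @ Rt.T + np.array([5.0, -2.0, 1.0])   # "intersected" points

R, t = rigid_register(measured, surveyed)
aligned = measured @ R.T + t
errors = np.linalg.norm(aligned - surveyed, axis=1)   # per-point distance errors
```

With real, noisy intersections the residual distances would not vanish; their mean and spread correspond to the 0.44 +/- 0.2 mm accuracy figure reported in the abstract.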
A case of complex regional pain syndrome with agnosia for object orientation.
Robinson, Gail; Cohen, Helen; Goebel, Andreas
2011-07-01
This systematic investigation of the neurocognitive correlates of complex regional pain syndrome (CRPS) in a single case also reports agnosia for object orientation in the context of persistent CRPS. We report a patient (JW) with severe long-standing CRPS who had no difficulty identifying and naming line drawings of objects presented in 1 of 4 cardinal orientations. In contrast, he was extremely poor at reorienting these objects into the correct upright orientation and in judging whether an object was upright or not. Moreover, JW made orientation errors when copying drawings of objects, and he also showed features of mirror reversal in writing single words and reading single letters. The findings are discussed in relation to accounts of visual processing. Agnosia for object orientation is the term for impaired knowledge of an object's orientation despite good recognition and naming of the same misoriented object. This defect has previously only been reported in patients with major structural brain lesions. The neuroanatomical correlates are discussed. The patient had no structural brain lesion, raising the possibility that nonstructural reorganisation of cortical networks may be responsible for his deficits. Other patients with CRPS may have related neurocognitive defects. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Likun
2018-01-01
In the field of remote sensing image processing, segmentation is a preliminary step for later analysis, including semi-automatic human interpretation and fully automatic machine recognition and learning. Since 2000, the object-oriented approach to remote sensing image processing has prevailed; its core is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on research into and improvement of this algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm as the optimal initialization. The algorithm is then modified, first by adjusting an area parameter and then by combining the area parameter with a heterogeneity parameter. Several experiments are carried out to show that the modified FNEA algorithm achieves better segmentation results than a traditional pixel-based method (the FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
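The heart of FNEA-style merging is a fusion cost that weighs the heterogeneity increase caused by merging two segments, balancing a spectral term against a shape term. A toy sketch follows; the weighting, the shape proxy, and the example segments are illustrative assumptions, not the paper's modified algorithm.

```python
import numpy as np

def color_heterogeneity(vals):
    # Size-weighted spectral heterogeneity of one segment (single band)
    return len(vals) * np.std(vals)

def merge_cost(seg1, seg2, w_color=0.8):
    """Fusion cost: heterogeneity increase from merging seg1 and seg2."""
    merged = np.concatenate([seg1, seg2])
    h_color = (color_heterogeneity(merged)
               - color_heterogeneity(seg1) - color_heterogeneity(seg2))
    # Crude shape proxy (assumption): penalize very unbalanced merges
    h_shape = abs(len(seg1) - len(seg2)) / len(merged)
    return w_color * h_color + (1 - w_color) * h_shape

# Merging spectrally similar segments should cost far less than distinct ones
similar = merge_cost(np.array([10.0, 11, 10, 9]), np.array([10.0, 10, 11, 9]))
distinct = merge_cost(np.array([10.0, 11, 10, 9]), np.array([50.0, 52, 49, 51]))
```

Multi-scale segmentation repeatedly merges the neighbor pair with the lowest cost until the cost exceeds a scale parameter, which is where the paper's area and heterogeneity modifications would enter.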
Building Shadow Detection from Ghost Imagery
NASA Astrophysics Data System (ADS)
Zhou, G.; Sha, J.; Yue, T.; Wang, Q.; Liu, X.; Huang, S.; Pan, Q.; Wei, J.
2018-05-01
Shadow is one of the basic features of remote sensing images; it conveys much information about the object that is otherwise lost or interfered with, and shadow removal has always been a difficult problem in remote sensing image processing. This paper mainly analyzes the characteristics and properties of shadows in the ghost image (traditional orthorectification). The DBM and the interior and exterior orientation elements of the image are used to calculate the solar zenith angle. The extent of the building shadows, determined from the solar zenith angle, is then combined with a region-growing method to detect building shadow areas. This method lays a solid foundation for the later repair of shadows in the ghost image. It will greatly improve the accuracy of building shadow detection and help solve problems in large-scale urban aerial images.
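The region-growing step mentioned above can be sketched minimally: starting from a seed inside the predicted shadow footprint, grow over 4-connected neighbors whose intensity stays below a darkness threshold. The image, seed, and threshold below are synthetic assumptions.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh):
    """Grow a region of dark pixels (intensity < thresh) from a seed pixel."""
    grown = np.zeros(img.shape, bool)
    q = deque([seed])
    grown[seed] = True
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not grown[nr, nc] and img[nr, nc] < thresh):
                grown[nr, nc] = True
                q.append((nr, nc))
    return grown

img = np.full((8, 8), 200.0)   # bright synthetic scene
img[2:6, 2:6] = 30.0           # dark "building shadow" block
mask = region_grow(img, (3, 3), thresh=100.0)
```

In the paper's pipeline, the seed and search extent would come from the shadow footprint predicted by the solar zenith angle rather than being chosen by hand.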
Perceived visual speed constrained by image segmentation
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.
1996-01-01
Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.
Zero- to low-field MRI with averaging of concomitant gradient fields.
Meriles, Carlos A; Sakellariou, Dimitris; Trabesinger, Andreas H; Demas, Vasiliki; Pines, Alexander
2005-02-08
Magnetic resonance imaging (MRI) encounters fundamental limits in circumstances in which the static magnetic field is not sufficiently strong to truncate unwanted, so-called concomitant components of the gradient field. This limitation affects the attainable optimal image fidelity and resolution most prominently in low-field imaging. In this article, we introduce the use of pulsed magnetic-field averaging toward relaxing these constraints. It is found that the image of an object can be retrieved by pulsed low fields in the presence of the full spatial variation of the imaging encoding gradient field even in the absence of the typical uniform high-field time-independent contribution. In addition, error-compensation schemes can be introduced through the application of symmetrized pulse sequences. Such schemes substantially mitigate artifacts related to evolution in strong magnetic-field gradients, magnetic fields that vary in direction and orientation, and imperfections of the applied field pulses.
Orientation priming of grasping decision for drawings of objects and blocks, and words.
Chainay, Hanna; Naouri, Lucie; Pavec, Alice
2011-05-01
This study tested the influence of orientation priming on grasping decisions. Two groups of 20 healthy participants had to select a preferred grasping orientation (horizontal, vertical) based on drawings of everyday objects, geometric blocks or object names. Three priming conditions were used: congruent, incongruent and neutral. The facilitating effects of priming were observed in the grasping decision task for drawings of objects and blocks but not object names. The visual information about congruent orientation in the prime quickened participants' responses but had no effect on response accuracy. The results are discussed in the context of the hypothesis that an object automatically potentiates grasping associated with it, and that the on-line visual information is necessary for grasping potentiation to occur. The possibility that the most frequent orientation of familiar objects might be included in object-action representation is also discussed.
Data filtering with support vector machines in geometric camera calibration.
Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C
2010-02-01
The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research topic. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Accurate camera calibration and orientation procedures are therefore prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but sensor orientation and calibration are generally performed with a perspective geometric model by means of the bundle adjustment. In this study, support vector machines (SVMs) with a radial basis function kernel are employed to model the distortions measured for an Olympus E10 camera system with an aspherical zoom lens; the modelled distortions are then used in the geometric calibration process. The aim is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. The results show the robustness of the SVM approach in correcting image coordinates by modelling total distortions during the on-the-job calibration process using a limited number of images.
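As a rough illustration of kernel-based distortion modelling, the sketch below fits a radial-distortion curve with RBF kernel ridge regression, a stand-in for the SVM regression used in the study; the Brown-model coefficients, sampling grid, and hyperparameters are all invented for the example.

```python
import numpy as np

def rbf_kernel(a, b, gamma=5.0):
    # Gaussian (RBF) kernel matrix between two 1-D sample vectors.
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def fit_rbf_ridge(x, y, gamma=5.0, lam=1e-6):
    # Kernel ridge regression: solve (K + lam*I) alpha = y.
    K = rbf_kernel(x, x, gamma)
    return np.linalg.solve(K + lam * np.eye(len(x)), y)

def predict(x_train, alpha, x_new, gamma=5.0):
    return rbf_kernel(x_new, x_train, gamma) @ alpha

# Synthetic radial distortion from hypothetical Brown-model terms:
# dr = k1*r^3 + k2*r^5, sampled over the normalized image radius.
k1, k2 = -0.12, 0.03
r = np.linspace(0.0, 1.0, 50)
dr = k1 * r**3 + k2 * r**5

alpha = fit_rbf_ridge(r, dr)
r_test = np.array([0.25, 0.5, 0.9])
dr_hat = predict(r, alpha, r_test)   # learned distortion at unseen radii
```

In the study proper the regression targets come from measured image-point residuals rather than a synthetic polynomial, and an SVM with an epsilon-insensitive loss replaces the ridge solver.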
Nurminen, Lauri; Angelucci, Alessandra
2014-01-01
The responses of neurons in primary visual cortex (V1) to stimulation of their receptive field (RF) are modulated by stimuli in the RF surround. This modulation is suppressive when the stimuli in the RF and surround are of similar orientation, but less suppressive or facilitatory when they are cross-oriented. Similarly, in human vision surround stimuli selectively suppress the perceived contrast of a central stimulus. Although the properties of surround modulation have been thoroughly characterized in many species, cortical areas and sensory modalities, its role in perception remains unknown. Here we argue that surround modulation in V1 consists of multiple components having different spatio-temporal and tuning properties, generated by different neural circuits and serving different visual functions. One component arises from LGN afferents; it is fast, untuned for orientation, and spatially restricted to the surround region nearest the RF (the near-surround), and its function is to normalize V1 cell responses to local contrast. Intra-V1 horizontal connections contribute a slower, narrowly orientation-tuned component to near-surround modulation, whose function is to increase the coding efficiency of natural images in a manner that leads to the extraction of object boundaries. The third component is generated by top-down feedback connections to V1; it is fast, broadly orientation-tuned, and extends into the far-surround, and its function is to enhance the salience of behaviorally relevant visual features. Far- and near-surround modulation thus act as parallel mechanisms: the former quickly detects and guides saccades/attention to salient visual scene locations, while the latter segments object boundaries in the scene. PMID:25204770
Method for radiometric calibration of an endoscope's camera and light source
NASA Astrophysics Data System (ADS)
Rai, Lav; Higgins, William E.
2008-03-01
An endoscope is a commonly used instrument for performing minimally invasive visual examination of the tissues inside the body. A physician uses the endoscopic video images to identify tissue abnormalities. The images, however, are highly dependent on the optical properties of the endoscope and its orientation and location with respect to the tissue structure. The analysis of endoscopic video images is, therefore, purely subjective. Studies suggest that the fusion of endoscopic video images (providing color and texture information) with virtual endoscopic views (providing structural information) can be useful for assessing various pathologies for several applications: (1) surgical simulation, training, and pedagogy; (2) the creation of a database for pathologies; and (3) the building of patient-specific models. Such fusion requires both geometric and radiometric alignment of endoscopic video images in the texture space. Inconsistent estimates of texture/color of the tissue surface result in seams when multiple endoscopic video images are combined together. This paper (1) identifies the endoscope-dependent variables to be calibrated for objective and consistent estimation of surface texture/color and (2) presents an integrated set of methods to measure them. Results show that the calibration method can be successfully used to estimate objective color/texture values for simple planar scenes, whereas uncalibrated endoscopes performed very poorly for the same tests.
Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.
Põder, Endel
2014-11-06
Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. © 2014 ARVO.
Xie, Weizhen; Zhang, Weiwei
2017-09-01
Negative emotion sometimes enhances memory (higher accuracy and/or vividness, e.g., flashbulb memories). The present study investigates whether it is the qualitative (precision) or quantitative (the probability of successful retrieval) aspect of memory that drives these effects. In a visual long-term memory task, observers memorized colors (Experiment 1a) or orientations (Experiment 1b) of sequentially presented everyday objects under negative, neutral, or positive emotions induced with International Affective Picture System images. In a subsequent test phase, observers reconstructed objects' colors or orientations using the method of adjustment. We found that mnemonic precision was enhanced under the negative condition relative to the neutral and positive conditions. In contrast, the probability of successful retrieval was comparable across the emotion conditions. Furthermore, the boost in memory precision was associated with elevated subjective feelings of remembering (vividness and confidence) and metacognitive sensitivity in Experiment 2. Altogether, these findings suggest a novel precision-based account for emotional memories. Copyright © 2017 Elsevier B.V. All rights reserved.
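The precision-versus-probability distinction rests on a mixture model of continuous recall: a Gaussian error component for remembered items (its spread is the inverse of precision) plus a uniform guessing component (its weight is one minus the retrieval probability). A minimal simulate-and-fit sketch, with all sample sizes and parameter values invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n, p_mem, sd):
    # With probability p_mem the response errs around the target (sd =
    # inverse precision); otherwise it is a uniform guess on the circle.
    remembered = rng.random(n) < p_mem
    return np.where(remembered,
                    rng.normal(0.0, sd, n),
                    rng.uniform(-np.pi, np.pi, n))

def fit_em(err, iters=200):
    # EM for (retrieval probability, precision); a crude stand-in for the
    # mixture-model fitting used with method-of-adjustment data.
    p, sd = 0.5, 1.0
    for _ in range(iters):
        g = p * np.exp(-err**2 / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))
        u = (1 - p) / (2 * np.pi)
        w = g / (g + u)              # responsibility of the memory component
        p = w.mean()
        sd = np.sqrt((w * err**2).sum() / w.sum())
    return p, sd

err = simulate(20000, p_mem=0.8, sd=0.3)
p_hat, sd_hat = fit_em(err)
```

On this synthetic data the two parameters are recovered independently, which is what lets the study attribute the emotion effect to precision rather than to retrieval probability.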
Marginal space learning for efficient detection of 2D/3D anatomical structures in medical images.
Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin
2009-01-01
Recently, marginal space learning (MSL) was proposed as a generic approach for automatic detection of 3D anatomical structures in many medical imaging modalities [1]. To accurately localize a 3D object, we need to estimate nine pose parameters (three for position, three for orientation, and three for anisotropic scaling). Instead of exhaustively searching the original nine-dimensional pose parameter space, only low-dimensional marginal spaces are searched in MSL to improve the detection speed. In this paper, we apply MSL to 2D object detection and perform a thorough comparison between MSL and the alternative full space learning (FSL) approach. Experiments on left ventricle detection in 2D MRI images show that MSL outperforms FSL in both speed and accuracy. In addition, we propose two novel techniques, constrained MSL and nonrigid MSL, to further improve the efficiency and accuracy. In many real applications, a strong correlation may exist among pose parameters in the same marginal spaces. For example, a large object may have large scaling values along all directions. Constrained MSL exploits this correlation for further speed-up. The original MSL only estimates the rigid transformation of an object in the image, and therefore cannot accurately localize a nonrigid object under a large deformation. The proposed nonrigid MSL directly estimates the nonrigid deformation parameters to improve the localization accuracy. The comparison experiments on liver detection in 226 abdominal CT volumes demonstrate the effectiveness of the proposed methods. Our system takes less than a second to accurately detect the liver in a volume.
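The speed-up of searching marginal spaces instead of the full pose space can be seen in a toy example. The analytic score below is a made-up surrogate for MSL's trained detectors, and the pose is reduced to (x, y, theta):

```python
import numpy as np

def pose_score(x, y, theta, true=(12, 7, 0.6)):
    # Made-up peaked score standing in for a trained pose classifier.
    tx, ty, tt = true
    return np.exp(-((x - tx)**2 + (y - ty)**2) / 4.0 - (theta - tt)**2 / 0.1)

xs, ys = np.arange(20), np.arange(20)
thetas = np.linspace(0.0, 1.5, 16)

# Stage 1: search the position marginal space only (theta held neutral).
pos_scores = np.array([[pose_score(x, y, 0.75) for y in ys] for x in xs])
bx, by = np.unravel_index(pos_scores.argmax(), pos_scores.shape)

# Stage 2: search orientation only at the surviving position candidate.
th_scores = np.array([pose_score(xs[bx], ys[by], t) for t in thetas])
best_theta = thetas[th_scores.argmax()]

# 20*20 + 16 = 416 evaluations instead of 20*20*16 = 6400 for a full search.
evaluations = xs.size * ys.size + thetas.size
```

The real method keeps several candidates per stage and learns a classifier per marginal space; the candidate-pruning structure is what the sketch preserves.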
Optical neural network system for pose determination of spinning satellites
NASA Technical Reports Server (NTRS)
Lee, Andrew; Casasent, David
1990-01-01
An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.
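The energy-minimization idea behind the neural tracker can be sketched as projected gradient descent on a quadratic assignment energy. The similarity matrix, penalty weight, and learning rate below are invented, and the actual tracker's energy function and optical implementation differ in detail:

```python
import numpy as np

def hopfield_assign(S, steps=1000, lr=0.05, lam=2.0):
    # Minimize E(V) = -sum(S*V) + lam*sum_i(row_i - 1)^2 + lam*sum_j(col_j - 1)^2
    # by gradient descent, clipping neuron activities V to [0, 1] so that
    # each track claims roughly one detection and vice versa.
    V = np.full(S.shape, 0.5)
    for _ in range(steps):
        grad = (-S
                + 2 * lam * (V.sum(axis=1, keepdims=True) - 1)
                + 2 * lam * (V.sum(axis=0, keepdims=True) - 1))
        V = np.clip(V - lr * grad, 0.0, 1.0)
    return V

# Hypothetical similarity between tracks (rows) and detections (columns).
S = np.array([[0.9, 0.1, 0.2],
              [0.2, 0.8, 0.1],
              [0.1, 0.3, 0.7]])
V = hopfield_assign(S)
assignment = V.argmax(axis=1)        # track -> detection correspondence
```

The constraint penalties play the role of the "one neuron per track" terms in the Hopfield energy; the data term rewards assignments with high similarity.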
Object-Oriented Programming When Developing Software in Geology and Geophysics
NASA Astrophysics Data System (ADS)
Ahmadulin, R. K.; Bakanovskaya, L. N.
2017-01-01
The paper reviews the role of object-oriented programming in developing software for geology and geophysics. The main stages have been identified at which it is worthwhile to apply the principles of object-oriented programming when developing such software. The research was based on a number of problems solved at the Geology and Petroleum Production Institute. Distinctive features of these problems are given, and areas of application of the object-oriented approach are identified. Developing applications in geology and geophysics has shown that the process of creating such products is simplified by the use of object-oriented programming, particularly when designing structures for data storage and graphical user interfaces.
Checking an integrated model of web accessibility and usability evaluation for disabled people.
Federici, Stefano; Micangeli, Andrea; Ruspantini, Irene; Borgianni, Stefano; Corradi, Fabrizio; Pasqualotto, Emanuele; Olivetti Belardinelli, Marta
2005-07-08
A combined objective-oriented and subjective-oriented method for evaluating the accessibility and usability of web pages for students with disabilities was tested. The objective-oriented approach verifies the conformity of interfaces to standard rules stated by national and international organizations responsible for web technology standardization, such as the W3C. Conversely, the subjective-oriented approach assesses how the final users interact with the artificial system, gauging levels of user satisfaction based on personal factors and environmental barriers. Five kinds of measurements were applied as objective-oriented and subjective-oriented tests. Objective-oriented evaluations were performed on the Help Desk web page for students with disability, included in the website of a large Italian state university. Subjective-oriented tests were administered to 19 students identified as disabled on the basis of their own declaration at university enrolment: 13 students were tested by means of the SUMI test and six students by means of 'cooperative evaluation'. The objective-oriented and subjective-oriented methods highlighted different and sometimes conflicting results. Both methods showed much more consistency regarding levels of accessibility than of usability. Since usability is largely affected by individual differences in users' own (dis)abilities, the subjective-oriented measures underscored the fact that blind students encountered many more web surfing difficulties.
The Assignment of Scale to Object-Oriented Software Measures
NASA Technical Reports Server (NTRS)
Neal, Ralph D.; Weistroffer, H. Roland; Coppins, Richard J.
1997-01-01
In order to improve productivity (and quality), measurement of specific aspects of software has become imperative. As object-oriented programming languages have become more widely used, metrics designed specifically for object-oriented software are required. Recently a large number of new metrics for object-oriented software have appeared in the literature. Unfortunately, many of these proposed metrics have not been validated to measure what they purport to measure. In this paper fifty (50) of these metrics are analyzed.
Characterization of fiber diameter using image analysis
NASA Astrophysics Data System (ADS)
Baheti, S.; Tunak, M.
2017-10-01
Due to their high surface area and porosity, the applications of nanofibers have increased in recent years. In the production process, determination of average fiber diameter and fiber orientation is crucial for quality assessment. The objective of the present study was to compare the relative performance of different methods discussed in the literature for estimation of fiber diameter. In this work, the automated fiber diameter analysis approaches available in the literature were implemented and validated on simulated images of known fiber diameter. Finally, all methods were compared for reliable and accurate estimation of fiber diameter in electrospun nanofiber membranes, based on the obtained means and standard deviations.
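A minimal version of the simulated-image validation step: generate a binary image of one fiber with known width, then recover the width from foreground run lengths per row. The image size and fiber width are arbitrary choices for the sketch, and real methods must also handle crossing and oblique fibers:

```python
import numpy as np

def stripe_image(width, size=64):
    # Simulated binary image of a single vertical fiber of known width.
    img = np.zeros((size, size), dtype=bool)
    start = size // 2 - width // 2
    img[:, start:start + width] = True
    return img

def row_diameters(img):
    # With one vertical fiber, the per-row foreground count is its width.
    return img.sum(axis=1)

img = stripe_image(width=9)
d = row_diameters(img)
mean_d, std_d = d.mean(), d.std()     # ideal case: mean 9, deviation 0
```

Comparing each method's estimated mean and standard deviation against the known ground truth on such phantoms is the validation logic the abstract describes.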
Automated extraction of metadata from remotely sensed satellite imagery
NASA Technical Reports Server (NTRS)
Cromp, Robert F.
1991-01-01
The paper discusses research in the Intelligent Data Management project at the NASA/Goddard Space Flight Center, with emphasis on recent improvements in low-level feature detection algorithms for performing real-time characterization of images. Images, including MSS and TM data, are characterized using neural networks and the interpretation of the neural network output by an expert system for subsequent archiving in an object-oriented data base. The data show the applicability of this approach to different arrangements of low-level remote sensing channels. The technique works well when the neural network is trained on data similar to the data used for testing.
Mono- and multilayers of molecular spoked carbazole wheels on graphite.
Jester, Stefan-S; Aggarwal, A Vikas; Kalle, Daniel; Höger, Sigurd
2014-01-01
Self-assembled monolayers of a molecular spoked wheel (a shape-persistent macrocycle with an intraannular spoke/hub system) and its synthetic precursor are investigated by scanning tunneling microscopy (STM) at the liquid/solid interface of 1-octanoic acid and highly oriented pyrolytic graphite. The submolecularly resolved STM images reveal that the molecules indeed behave as more or less rigid objects of certain sizes and shapes - depending on their chemical structures. In addition, the images provide insight into the multilayer growth of the molecular spoked wheels (MSWs), where the first adlayer acts as a template for the commensurate adsorption of molecules in the second layer.
Computer-assisted virtual autopsy using surgical navigation techniques.
Ebert, Lars Christian; Ruder, Thomas D; Martinez, Rosa Maria; Flach, Patricia M; Schweitzer, Wolf; Thali, Michael J; Ampanozi, Garyfalia
2015-01-01
OBJECTIVE: Virtual autopsy methods, such as postmortem CT and MRI, are increasingly being used in forensic medicine. Forensic investigators with little to no training in diagnostic radiology and medical laypeople such as state's attorneys often find it difficult to understand the anatomic orientation of axial postmortem CT images. We present a computer-assisted system that permits postmortem CT datasets to be quickly and intuitively resliced in real time at the body to narrow the gap between radiologic imaging and autopsy. Our system is a potentially valuable tool for planning autopsies, showing findings to medical laypeople, and teaching CT anatomy, thus further closing the gap between radiology and forensic pathology.
Considerations of persistence and security in CHOICES, an object-oriented operating system
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Madany, Peter W.
1990-01-01
The current design of the CHOICES persistent object implementation is summarized, and research in progress is outlined. CHOICES is implemented as an object-oriented system, and persistent objects appear to simplify and unify many functions of the system. It is demonstrated that persistent data can be accessed through an object-oriented file system model as efficiently as by an existing optimized commercial file system. The object-oriented file system can be specialized to provide an object store for persistent objects. The problems that arise in building an efficient persistent object scheme in a 32-bit virtual address space that only uses paging are described. Despite its limitations, the solution presented allows quite large numbers of objects to be active simultaneously, and permits sharing and efficient method calls.
Lu, Hao; Zhao, Kaichun; Wang, Xiaochu; You, Zheng; Huang, Kaoli
2016-01-01
Bio-inspired imaging polarization navigation, which can provide heading information by sensing the polarization of skylight, has advantages in precision and interference resistance over polarization navigation sensors that use photodiodes. Although many types of imaging polarimeters exist, they are not necessarily suitable for research on imaging polarization navigation algorithms. To verify such an algorithm, a real-time imaging orientation determination system was designed and implemented. Essential calibration procedures for this type of system, including camera parameter calibration and calibration of the inconsistency among the complementary metal oxide semiconductor sensors, were discussed, designed, and implemented. The calibration results were used to undistort and rectify the multi-camera system. An orientation determination experiment was conducted. The results indicated that the system could acquire polarized skylight images, apply the calibrations, and resolve orientation with the algorithm under verification in real time. An orientation determination algorithm based on image processing was tested on the system, and its performance and properties were evaluated. The rate of the algorithm was over 1 Hz, the error was less than 0.313°, and the population standard deviation was 0.148° without any data filtering. PMID:26805851
Choices, Frameworks and Refinement
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Islam, Nayeem; Johnson, Ralph; Kougiouris, Panos; Madany, Peter
1991-01-01
In this paper we present a method for designing operating systems using object-oriented frameworks. A framework can be refined into subframeworks. Constraints specify the interactions between the subframeworks. We describe how we used object-oriented frameworks to design Choices, an object-oriented operating system.
NASA Astrophysics Data System (ADS)
Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.
2014-08-01
The paper addresses a promising visualization concept related to the combination of sensor and synthetic images in order to enhance the situation awareness of a pilot during aircraft landing. A real-time algorithm for the fusion of a sensor image, acquired by an onboard camera, and a synthetic 3D image of the external view, generated in an onboard computer, is proposed. The pixel correspondence between the sensor and synthetic images is obtained by an exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in object space (the runway plane) and then to calculate intensity projections of edge pixels along different directions of the intensity gradient. Experiments on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.
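The projection step of such a transform, mapping edge pixels into the runway plane, reduces to applying a plane-induced homography. The matrix below is a hypothetical image-to-plane homography chosen for the example, not one derived from the paper's navigation data:

```python
import numpy as np

def project_to_plane(pts, H):
    # Apply a 3x3 homography to Nx2 pixel coordinates (homogeneous divide).
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]

# Hypothetical image-to-runway-plane homography (scale + mild perspective).
H = np.array([[2.0, 0.0,   1.0],
              [0.0, 2.0,  -3.0],
              [0.0, 0.001, 1.0]])

edge_pixels = np.array([[100.0, 200.0],
                        [150.0, 200.0]])
ground = project_to_plane(edge_pixels, H)
```

After this mapping, straight runway edges remain straight lines in the plane, so intensity-gradient projections can be accumulated along candidate directions to vote for the runway border.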
Long-term scale adaptive tracking with kernel correlation filters
NASA Astrophysics Data System (ADS)
Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui
2018-04-01
Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked from cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of the Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.
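The correlation-filter core of such trackers can be sketched in a few lines. This is a MOSSE-style single-channel filter rather than the paper's kernelized, multi-feature (SURF/FHOG/color) version, and the patch size, sigma, and regularization are arbitrary:

```python
import numpy as np

def gaussian_peak(h, w, cy, cx, sigma=2.0):
    # Desired response: a Gaussian peak at the target center.
    y, x = np.ogrid[:h, :w]
    return np.exp(-((y - cy)**2 + (x - cx)**2) / (2 * sigma**2))

def train_filter(patch, response, lam=1e-2):
    # Closed-form filter in the Fourier domain (MOSSE-style):
    # H* = (G . conj(F)) / (F . conj(F) + lam)
    F = np.fft.fft2(patch)
    G = np.fft.fft2(response)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def detect(patch, Hf):
    # Correlation response; its argmax is the estimated target location.
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * Hf))

rng = np.random.default_rng(0)
template = rng.standard_normal((32, 32))      # stand-in for image features
Hf = train_filter(template, gaussian_peak(32, 32, 16, 16))

shifted = np.roll(template, (3, 5), axis=(0, 1))   # target moved by (3, 5)
resp = detect(shifted, Hf)
dy, dx = np.unravel_index(resp.argmax(), resp.shape)
```

The Fourier-domain formulation is what makes these trackers fast: training and detection cost only a few FFTs per frame, leaving headroom for the re-detection and scale-adaptation logic the paper adds on top.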
Orientational imaging of a single plasmonic nanoparticle using dark-field hyperspectral imaging
NASA Astrophysics Data System (ADS)
Mehta, Nishir; Mahigir, Amirreza; Veronis, Georgios; Gartia, Manas Ranjan
2017-08-01
The orientation of plasmonic nanostructures is an important feature in many nanoscale applications such as catalysis, biosensors, DNA interactions, protein detection, hotspots for surface-enhanced Raman spectroscopy (SERS), and fluorescence resonance energy transfer (FRET) experiments. However, due to the diffraction limit, it is challenging to obtain the exact orientation of a nanostructure using a standard optical microscope. Hyperspectral imaging microscopy is a state-of-the-art visualization technology that combines modern optics with hyperspectral imaging and a computer system to provide identification and quantitative spectral analysis of nano- and microscale structures. In this work, we first use a transmitted dark-field imaging technique to locate a single nanoparticle on a glass substrate. We then employ hyperspectral imaging at the same spot to investigate the orientation of the single nanoparticle. No special tagging or staining of the nanoparticle is needed, as is typically required in traditional microscopy techniques. Different orientations were identified by carefully characterizing and calibrating the shifts in spectral response from different orientations of similarly sized nanoparticles. The recorded wavelengths range from 300 nm to 900 nm. The orientations measured by hyperspectral microscopy were validated using finite-difference time-domain (FDTD) electrodynamics calculations and scanning electron microscopy (SEM) analysis. The combination of high-resolution nanometer-scale imaging techniques and modern numerical modeling capabilities thus enables a meaningful advance in our ability to manipulate and fabricate shaped nanostructures. This work will advance our understanding of the behavior of small nanoparticle clusters, which is useful for sensing, nanomedicine, and surface science.
Is the perception of 3D shape from shading based on assumed reflectance and illumination?
Todd, James T; Egan, Eric J L; Phillips, Flip
2014-01-01
The research described in the present article was designed to compare three types of image shading: one generated with a Lambertian BRDF and homogeneous illumination such that image intensity was determined entirely by local surface orientation irrespective of position; one that was textured with a linear intensity gradient, such that image intensity was determined entirely by local surface position irrespective of orientation; and another that was generated with a Lambertian BRDF and inhomogeneous illumination such that image intensity was influenced by both position and orientation. A gauge figure adjustment task was used to measure observers' perceptions of local surface orientation on the depicted surfaces, and the probe points included 60 pairs of regions that both had the same orientation. The results show clearly that observers' perceptions of these three types of stimuli were remarkably similar, and that probe regions with similar apparent orientations could have large differences in image intensity. This latter finding is incompatible with any process for computing shape from shading that assumes any plausible reflectance function combined with any possible homogeneous illumination.
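The first two stimulus types are easy to state formally: under a Lambertian BRDF with homogeneous illumination, image intensity is a function of surface orientation alone, so two patches with the same normal receive the same intensity regardless of position, whereas a linear intensity gradient depends on position alone. A minimal sketch, with the light direction, normals, and gradient coefficients chosen arbitrarily:

```python
import numpy as np

def lambert(normal, light):
    # Lambertian shading: intensity depends only on orientation via n.l.
    n = normal / np.linalg.norm(normal)
    l = light / np.linalg.norm(light)
    return max(0.0, float(n @ l))

def gradient_texture(y, a=0.02, b=0.1):
    # Linear intensity gradient: intensity depends only on position.
    return a * y + b

light = np.array([0.0, 0.0, 1.0])
n = np.array([0.0, 0.6, 0.8])

# Two patches at different positions but with identical normals
# receive identical Lambertian intensity:
i_at_p1 = lambert(n, light)
i_at_p2 = lambert(n, light)

# The gradient texture instead assigns them different intensities:
t1, t2 = gradient_texture(10), gradient_texture(40)
```

This is exactly the dissociation the experiment exploits: probe pairs with equal orientation can have equal intensity in one stimulus and very different intensity in the other.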
The limited effect of coincident orientation on the choice of intrinsic axis.
Li, Jing; Su, Wei
2015-06-01
The allocentric system computes and represents general object-to-object spatial relationships to provide a spatial frame of reference other than the egocentric system. The intrinsic frame-of-reference system theory, which suggests people learn the locations of objects based upon an intrinsic axis, is important in research about the allocentric system. The purpose of the current study was to determine whether the effect of coincident orientation on the choice of intrinsic axis was limited. Two groups of participants (24 men, 24 women; M age = 24 yr., SD = 2) encoded different spatial layouts in which the objects shared a coincident orientation of 315° or 225°, respectively, at the learning perspective (0°). The response pattern in the partial-scene-recognition task following learning reflected different strategies for choosing the intrinsic axis under different conditions. Under the 315° object-orientation condition, the objects' coincident orientation was as important as the symmetric axis in the choice of the intrinsic axis. However, participants were more likely to choose the symmetric axis as the intrinsic axis under the 225° object-orientation condition. The results suggest the effect of coincident orientation on the choice of intrinsic axis is limited.
3D TEM reconstruction and segmentation process of laminar bio-nanocomposites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iturrondobeitia, M., E-mail: maider.iturrondobeitia@ehu.es; Okariz, A.; Fernandez-Martinez, R.
2015-03-30
The microstructure of laminar bio-nanocomposites (poly(lactic acid) (PLA)/clay) depends on the amount of clay platelet opening after integration with the polymer matrix and determines the final properties of the material. Transmission electron microscopy (TEM) is the only technique that can provide a direct observation of the layer dispersion and the degree of exfoliation. However, the orientation of the clay platelets, which affects the final properties, is practically immeasurable from a single 2D TEM image. This issue can be overcome using transmission electron tomography (ET), a technique that allows the complete 3D characterization of the structure, including measurement of the orientation of the clay platelets, their morphology and their 3D distribution. ET involves a 3D reconstruction of the study volume and a subsequent segmentation of the study object. Currently, accurate segmentation is performed manually, which is inefficient and tedious. The aim of this work is to propose an objective, automated segmentation methodology for 3D TEM tomography reconstructions. In this method the segmentation threshold is optimized by minimizing the variation of the dimensions of the segmented objects and matching the segmented V_clay (%) to the actual one. The method is first validated using a fictitious set of objects, and then applied to a nanocomposite.
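The threshold-matching criterion can be imitated on synthetic data: scan candidate thresholds and keep the one whose segmented volume fraction matches the known clay fraction. The gray values, noise level, and 10% fraction below are invented for the sketch, and the object-size-variation term of the actual method is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic tomogram: bright "platelet" voxels (~0.8) in a dark matrix (~0.2).
vol = 0.2 + 0.05 * rng.standard_normal((32, 32, 32))
mask = rng.random((32, 32, 32)) < 0.10        # ground-truth 10% clay
vol[mask] = 0.8 + 0.05 * rng.standard_normal(int(mask.sum()))

target_fraction = mask.mean()                 # the known V_clay
thresholds = np.linspace(0.3, 0.7, 81)
fractions = np.array([(vol > t).mean() for t in thresholds])
best_t = thresholds[np.abs(fractions - target_fraction).argmin()]

segmented = vol > best_t
agreement = (segmented == mask).mean()        # near-perfect on this phantom
```

On real reconstructions the intensity histograms of platelet and matrix overlap far more than in this phantom, which is why the method also penalizes variation in the segmented object dimensions rather than relying on the volume-fraction match alone.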
An Object-Oriented Approach for Analyzing CALIPSO's Profile Observations
NASA Astrophysics Data System (ADS)
Trepte, C. R.
2016-12-01
The CALIPSO satellite mission is a pioneering international partnership between NASA and the French Space Agency, CNES. Since launch on 28 April 2006, CALIPSO has been acquiring near-continuous lidar profile observations of clouds and aerosols in the Earth's atmosphere. Many studies have profitably used these observations to advance our understanding of climate, weather and air quality. For the most part, however, these studies have considered CALIPSO profile measurements independent from one another and have not related each to neighboring or family observations within a cloud element or aerosol feature. In this presentation we describe an alternative approach that groups measurements into objects visually identified from CALIPSO browse images. The approach makes use of the Visualization of CALIPSO (VOCAL) software tool that enables a user to outline a region of interest and save coordinates into a database. The selected features or objects can then be analyzed to explore spatial correlations over the feature's domain and construct bulk statistical properties for each structure. This presentation will show examples that examine cirrus and dust layers and will describe how this object-oriented approach can provide added insight into physical processes beyond conventional statistical treatments. It will further show results with combined measurements from other A-Train sensors to highlight advantages of viewing features in this manner.
A client/server system for Internet access to biomedical text/image databanks.
Thoma, G R; Long, L R; Berman, L E
1996-01-01
Internet access to mixed text/image databanks is finding application in the medical world. One example is a database of medical X-rays and associated data consisting of demographic, socioeconomic, physician's exam, medical laboratory, and other information collected as part of a nationwide health survey conducted by the government. Another example is a collection of digitized cryosection, CT, and MR images taken of cadavers as part of the National Library of Medicine's Visible Human Project. In both cases, the challenge is to provide access to both the images and the associated text for a wide end-user community, enabling users to create atlases, conduct epidemiological studies, and develop image-specific algorithms for compression, enhancement, and other types of image processing, among many other applications. The databanks mentioned above are being created in prototype form. This paper describes the prototype system developed for archiving the data, and the client software that enables a broad range of end users to access the archive, retrieve text and image data, display the data, and manipulate the images. System design considerations include: data organization in a relational database management system with object-oriented extensions; a hierarchical organization of the image data by different resolution levels for different user classes; client design based on common hardware and software platforms incorporating SQL search capability, X Window, Motif, and TAE (a development environment supporting rapid prototyping and management of graphic-oriented user interfaces); the potential to include ultra-high-resolution display monitors as a user option; an intuitive user-interface paradigm for building complex queries; and contrast enhancement, magnification, and mensuration tools for better viewing by the user.
NASA Astrophysics Data System (ADS)
Feizizadeh, Bakhtiar; Blaschke, Thomas; Tiede, Dirk; Moghaddam, Mohammad Hossein Rezaei
2017-09-01
This article presents a method of object-based image analysis (OBIA) for landslide delineation and landslide-related change detection from multi-temporal satellite images. It uses both spatial and spectral information on landslides, through spectral analysis, shape analysis, textural measurements using a gray-level co-occurrence matrix (GLCM), and fuzzy logic membership functionality. Following an initial segmentation step, particular combinations of various information layers were investigated to generate objects. This was achieved by applying multi-resolution segmentation to IRS-1D, SPOT-5, and ALOS satellite imagery in sequential steps of feature selection and object classification, and using slope and flow direction derivatives from a digital elevation model together with topographically-oriented gray level co-occurrence matrices. Fuzzy membership values were calculated for 11 different membership functions using 20 landslide objects from a landslide training dataset. Six fuzzy operators were used for the final classification and the accuracies of the resulting landslide maps were compared. A Fuzzy Synthetic Evaluation (FSE) approach was adapted for validation of the results and for an accuracy assessment using the landslide inventory database. The FSE approach revealed that the AND operator performed best with an accuracy of 93.87% for 2005 and 94.74% for 2011, closely followed by the MEAN Arithmetic operator, while the OR and AND (*) operators yielded relatively low accuracies. An object-based change detection was then applied to monitor landslide-related changes that occurred in northern Iran between 2005 and 2011. Knowledge rules to detect possible landslide-related changes were developed by evaluating all possible landslide-related objects for both time steps.
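The fuzzy operators compared in this study combine per-criterion membership values into a single classification score. A toy illustration, with made-up membership values and standard definitions of the operators (AND as minimum, OR as maximum, MEAN as the arithmetic average), not the authors' implementation:

```python
# Standard fuzzy-operator definitions; the membership vector is invented.

def fuzzy_and(memberships):
    """AND operator: the minimum (strictest) membership value."""
    return min(memberships)

def fuzzy_or(memberships):
    """OR operator: the maximum (most permissive) membership value."""
    return max(memberships)

def fuzzy_mean(memberships):
    """Arithmetic MEAN operator: the average membership value."""
    return sum(memberships) / len(memberships)

# Membership of one image object under four hypothetical landslide
# criteria (e.g. slope, GLCM texture, spectral similarity, shape).
m = [0.9, 0.8, 0.95, 0.7]
scores = {
    "AND": fuzzy_and(m),    # 0.7
    "OR": fuzzy_or(m),      # 0.95
    "MEAN": fuzzy_mean(m),  # 0.8375
}
```

An object is then labeled a landslide when the chosen operator's score exceeds a decision threshold; AND demands that every criterion is satisfied, which fits its strong performance reported above.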
Three Object-Oriented enhancements for EPICS
NASA Astrophysics Data System (ADS)
Osberg, E. A.; Dohan, D. A.; Richter, R.; Biggs, R.; Chillara, K.; Wade, D.; Bossom, J.
1994-12-01
In line with our group's intention of producing software using, where possible, Object-Oriented methodologies and techniques in the development of RF control systems, we have undertaken three projects to enhance the EPICS software environment. Two of the projects involve interfaces to EPICS Channel Access from Object-Oriented languages. The third is an enhancement to the EPICS State Notation Language to better support the Shlaer-Mellor Object-Oriented Analysis and Design methodology. This paper discusses the motivation, approaches, results, and future directions of these three projects.
Optical computed tomography for spatially isotropic four-dimensional imaging of live single cells
Kelbauskas, Laimonas; Shetty, Rishabh; Cao, Bin; Wang, Kuo-Chen; Smith, Dean; Wang, Hong; Chao, Shi-Hui; Gangaraju, Sandhya; Ashcroft, Brian; Kritzer, Margaret; Glenn, Honor; Johnson, Roger H.; Meldrum, Deirdre R.
2017-01-01
Quantitative three-dimensional (3D) computed tomography (CT) imaging of living single cells enables orientation-independent morphometric analysis of the intricacies of cellular physiology. Since its invention, x-ray CT has become indispensable in the clinic for diagnostic and prognostic purposes due to its quantitative absorption-based imaging in true 3D that allows objects of interest to be viewed and measured from any orientation. However, x-ray CT has not been useful at the level of single cells because there is insufficient contrast to form an image. Recently, optical CT has been developed successfully for fixed cells, but this technology, called Cell-CT, is incompatible with live-cell imaging because it relies on stains, such as hematoxylin, that compromise cell viability. We present a novel development of optical CT for quantitative, multispectral functional 4D (three spatial + one spectral dimension) imaging of living single cells. The method applied to immune system cells offers truly isotropic 3D spatial resolution and enables time-resolved imaging studies of cells suspended in aqueous medium. Using live-cell optical CT, we found a heterogeneous response to mitochondrial fission inhibition in mouse macrophages and differential basal remodeling of small (0.1 to 1 fl) and large (1 to 20 fl) nuclear and mitochondrial structures on a 20- to 30-s time scale in human myelogenous leukemia cells. Because of its robust 3D measurement capabilities, live-cell optical CT represents a powerful new tool in the biomedical research field. PMID:29226240
Rocket instrument for far-UV spectrophotometry of faint astronomical objects.
Hartig, G F; Fastie, W G; Davidsen, A F
1980-03-01
A sensitive sounding rocket instrument for moderate (~10 Å) resolution far-UV (λ1160–λ1750 Å) spectrophotometry of faint astronomical objects has been developed. The instrument employs a photon-counting microchannel-plate imaging detector and a concave grating spectrograph behind a 40-cm Dall-Kirkham telescope. A unique remote-control pointing system, incorporating an SIT vidicon aspect camera, two star trackers, and a tone-encoded command telemetry link, permits the telescope to be oriented to within 5 arc sec of any target for which suitable guide stars can be found. The design, construction, calibration, and flight performance of the instrument are discussed.
NASA Astrophysics Data System (ADS)
Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil
2015-01-01
Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to the thermal hyperspectral data, and eight pixel-based classifiers, i.e., constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter, are tested. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. The LWIR data contain emissivity and temperature information about an object. A highest overall accuracy of 90.99% was obtained using the SAM algorithm for the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach was applied to the thermal data. The image is segmented into meaningful objects based on properties such as geometry and length; pixels are grouped into these objects using a watershed algorithm, and a supervised classification algorithm, i.e., a support vector machine (SVM), is then applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
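The SAM score used here has a simple standard definition: the angle between a pixel spectrum and a reference spectrum, which is insensitive to overall brightness. A minimal sketch with made-up spectra (not the authors' code):

```python
import math

# Spectral angle mapper (SAM): angle between a pixel spectrum and a
# reference spectrum, treated as vectors. Smaller angle = better match.

def spectral_angle(pixel, reference):
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp to [-1, 1] to guard against floating-point round-off.
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_r))))

# A pixel that is a scaled copy of the reference has angle ~0,
# which is why SAM is robust to illumination (brightness) changes.
assert spectral_angle([2.0, 4.0, 6.0], [1.0, 2.0, 3.0]) < 1e-6
```

Classification assigns each pixel to the reference class with the smallest angle.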
Orientation selective deep brain stimulation
NASA Astrophysics Data System (ADS)
Lehto, Lauri J.; Slopsema, Julia P.; Johnson, Matthew D.; Shatillo, Artem; Teplitzky, Benjamin A.; Utecht, Lynn; Adriany, Gregor; Mangia, Silvia; Sierra, Alejandra; Low, Walter C.; Gröhn, Olli; Michaeli, Shalom
2017-02-01
Objective. Target selectivity of deep brain stimulation (DBS) therapy is critical, as the precise locus and pattern of the stimulation dictates the degree to which desired treatment responses are achieved and adverse side effects are avoided. There is a clear clinical need to improve DBS technology beyond currently available stimulation steering and shaping approaches. We introduce orientation selective neural stimulation as a concept to increase the specificity of target selection in DBS. Approach. This concept, which involves orienting the electric field along an axonal pathway, was tested in the corpus callosum of the rat brain by freely controlling the direction of the electric field on a plane using a three-electrode bundle, and monitoring the response of the neurons using functional magnetic resonance imaging (fMRI). Computational models were developed to further analyze axonal excitability for varied electric field orientation. Main results. Our results demonstrated that the strongest fMRI response was observed when the electric field was oriented parallel to the axons, while almost no response was detected with the perpendicular orientation of the electric field relative to the primary fiber tract. These results were confirmed by computational models of the experimental paradigm quantifying the activation of radially distributed axons while varying the primary direction of the electric field. Significance. The described strategies identify a new course for selective neuromodulation paradigms in DBS based on axonal fiber orientation.
Object oriented development of engineering software using CLIPS
NASA Technical Reports Server (NTRS)
Yoon, C. John
1991-01-01
Engineering applications involve numeric complexity and manipulation of large amounts of data. Traditionally, numeric computation has been the main concern when developing engineering software. As engineering application software has become larger and more complex, management of resources such as data, rather than numeric complexity, has become the major software design problem. Object oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with its deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The CLIPS Object Oriented Language's (COOL) object oriented features are more versatile than those of C++. A software design methodology based on object oriented and procedural approaches, appropriate for engineering software and to be implemented in CLIPS, is outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.
Three-quarter views are subjectively good because object orientation is uncertain.
Niimi, Ryosuke; Yokosawa, Kazuhiko
2009-04-01
Because the objects that surround us are three-dimensional, their appearance and our visual perception of them change depending on an object's orientation relative to a viewpoint. One of the most remarkable effects of object orientation is that viewers prefer three-quarter views over others, such as front and back, but the exact source of this preference has not been firmly established. We show that orientation perception of the three-quarter view is relatively imprecise and that this imprecision is related to preference for this view. Human vision is largely insensitive to variations among different three-quarter views (e.g., 45° vs. 50°); therefore, the three-quarter view is perceived as if it corresponds to a wide range of orientations. In other words, it functions as the typical representation of the object.
Object-oriented models of cognitive processing.
Mather, G
2001-05-01
Information-processing models of vision and cognition are inspired by procedural programming languages. Models that emphasize object-based representations are closely related to object-oriented programming languages. The concepts underlying object-oriented languages provide a theoretical framework for cognitive processing that differs markedly from that offered by procedural languages. This framework is well-suited to a system designed to deal flexibly with discrete objects and unpredictable events in the world.
Fast and accurate edge orientation processing during object manipulation
Flanagan, J Randall; Johansson, Roland S
2018-01-01
Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804
NASA Astrophysics Data System (ADS)
Lin, Ying-Tong; Chang, Kuo-Chen; Yang, Ci-Jian
2017-04-01
As a result of global warming in recent decades, Taiwan has experienced more and more extreme typhoons with hazardous massive landslides. In this study, we use an object-oriented analysis method to classify landslide areas at Baolai village using Formosat-2 satellite images. We applied multiresolution segmentation to generate image objects and used hierarchical logic to classify five different kinds of features, after which the landslide areas were classified into different landslide types. In addition, we used a stochastic procedure to integrate landslide susceptibility maps. This study assumed the extreme event of 2009 Typhoon Morakot, in which precipitation reached 1991.5 mm in 5 days, together with the most landslide-susceptible areas. The results show that the landslide area of the study site changed greatly: most landslides were caused by gully erosion producing dip-slope slides, or by stream erosion, especially at undercut banks. From the landslide susceptibility maps, we know that old landslide areas have high potential for renewed landslides in extreme events. This study demonstrates the changes in landslide area and in the landslide-susceptible areas. Keywords: Formosat-2, object-oriented, segmentation, classification, landslide, Baolai Village, SW Taiwan, FS
Oriented modulation for watermarking in direct binary search halftone images.
Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der
2012-09-01
In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding, while maintaining high image quality. This technique is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel oriented high-efficient direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, the least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and the naïve Bayes classifier is used to analyze the extracted features and ultimately to decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and realizes the processing efficiency and robustness to be adapted in printing applications.
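The orientation-modulation idea of carrying up to 3 watermark bits per block can be sketched as a mapping between bit triples and eight halftone texture orientations. The mapping below is an illustrative assumption, not the paper's actual encoder (which renders the textures via DBS halftoning and decodes with trained filters plus a naïve Bayes classifier):

```python
# Hypothetical 3-bit orientation-modulation codebook: 8 texture
# orientations spaced evenly over 180 degrees, one per bit triple.

ORIENTATIONS = [k * 180 / 8 for k in range(8)]  # 0, 22.5, ..., 157.5 deg

def bits_to_orientation(bits):
    """Encode a 3-bit tuple, e.g. (1, 0, 1), as a texture orientation."""
    index = bits[0] * 4 + bits[1] * 2 + bits[2]
    return ORIENTATIONS[index]

def orientation_to_bits(angle):
    """Decode a detected texture orientation back into the 3-bit tuple."""
    index = ORIENTATIONS.index(angle)
    return ((index >> 2) & 1, (index >> 1) & 1, index & 1)
```

In the real system the decoder never sees the angle directly; it estimates the block's texture orientation from frequency-domain features before this inverse mapping applies.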
Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao
2009-01-01
Without visual information, blind people face many hardships with shopping, reading, finding objects, and so on. Therefore, we developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit, and an earphone, working with built-in customizable auditory coding algorithms. Employing environment-understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through the earphone. The user is able to recognize the type, motion state, and location of objects of interest with the help of SoundView. Compared with other visual assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption, and easy customization.
Zach, Bernhard; Hofer, Ernst; Asslaber, Martin; Ahammer, Helmut
2016-01-01
The human heart has a heterogeneous structure, characterized by different cell types and their spatial configurations. The physical structure, especially the fibre orientation and the interstitial fibrosis, determines the electrical excitation and, in further consequence, the contractility in macroscopic as well as microscopic areas. Modern image processing methods and parameters can be used to describe image content and image texture. In most cases, however, the description of the texture is not satisfactory, because the fibre orientation detected with common algorithms is biased by elements such as fibrocytes or endothelial nuclei. The goal of this work is to determine whether cardiac tissue can be analysed and classified at a microscopic level by automated image processing methods, with a focus on accurate detection of the fibre orientation. Quantitative parameters for identification of textures of different complexity or pathological attributes inside the heart were determined. The focus was set on the detection of the fibre orientation, which was calculated on the basis of the cardiomyocytes' nuclei. It turned out that the orientation of these nuclei corresponds with high precision to the fibre orientation in the image plane. Additionally, these nuclei also indicate very well the inclination of the fibre.
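One standard way to compute the orientation of an elongated structure such as a segmented nucleus, and a plausible reading of the orientation step described above (the authors' actual pipeline may differ), is via second-order central image moments:

```python
import math

# Orientation of a pixel blob from second-order central moments: the
# angle of the principal axis of the pixel distribution. This is the
# textbook moments formula, used here as a stand-in for the paper's
# nucleus-orientation step.

def orientation_deg(pixels):
    """pixels: list of (x, y) coordinates of one segmented nucleus."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    cx = sum(xs) / len(xs)
    cy = sum(ys) / len(ys)
    mu20 = sum((x - cx) ** 2 for x in xs)
    mu02 = sum((y - cy) ** 2 for y in ys)
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels)
    # Principal-axis angle of the covariance ellipse, in degrees.
    return math.degrees(0.5 * math.atan2(2 * mu11, mu20 - mu02))
```

Averaging such per-nucleus angles over a region would then give a local fibre-orientation estimate in the image plane.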
[Real-time detection and processing of medical signals under windows using Lcard analog interfaces].
Kuz'min, A A; Belozerov, A E; Pronin, T V
2008-01-01
Multipurpose modular software for an analog interface based on the Lcard 761 is considered. Algorithms for pipeline processing of medical signals under Windows with dynamic control of computational resources are suggested. The software consists of user-friendly modules that can be completed and modified. The module hierarchy is based on object-oriented inheritance principles, which make it possible to construct various real-time systems for long-term detection, processing, and imaging of multichannel medical signals.
High-Resolution Large-Field-of-View Ultrasound Breast Imager
2012-06-01
plane waves all having the same wave vector magnitude k{sub 0} but propagating in different directions. This observation forms the mathematical basis of the ... origin of the object Fourier space and is oriented opposite the propagation direction of the probing plane wave field. Moreover, the radius of ... in water. Each element was electrically tuned to match the 50-Ohm impedance of an RF amplifier powered by a 4.0 MHz electrical signal from a
Multiscale vector fields for image pattern recognition
NASA Technical Reports Server (NTRS)
Low, Kah-Chan; Coggins, James M.
1990-01-01
A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
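The vector-sum step described above can be sketched directly from the abstract; the filter bank and response values below are assumptions, not the authors' implementation:

```python
import math

# Vector sum over an oriented filter bank: each filter contributes a
# vector whose direction is the filter's preferred orientation and whose
# length is the strength of the filter's output. The resultant gives the
# local orientation (its direction) and the strength of the orientation
# preference (its magnitude).

def local_orientation(responses):
    """responses: list of (preferred_orientation_deg, strength) pairs
    for one pixel at one scale."""
    vx = sum(s * math.cos(math.radians(o)) for o, s in responses)
    vy = sum(s * math.sin(math.radians(o)) for o, s in responses)
    angle = math.degrees(math.atan2(vy, vx))
    magnitude = math.hypot(vx, vy)
    return angle, magnitude
```

A well-known caveat of naive orientation vector sums, plausibly among the limitations the paper discusses, is that responses at orientations 180° apart cancel even though they describe the same undirected orientation.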
HiCAT Software Infrastructure: Safe hardware control with object oriented Python
NASA Astrophysics Data System (ADS)
Moriarty, Christopher; Brooks, Keira; Soummer, Remi
2018-01-01
High contrast imaging for Complex Aperture Telescopes (HiCAT) is a testbed designed to demonstrate coronagraphy and wavefront control for segmented on-axis space telescopes such as envisioned for LUVOIR. To limit air movement in the testbed room, software interfaces for several different hardware components were developed to completely automate operations. When developing software interfaces for many different pieces of hardware, unhandled errors are commonplace and can prevent the software from properly closing a hardware resource. Some fragile components (e.g. deformable mirrors) can be permanently damaged because of this. We present an object oriented Python-based infrastructure to safely automate hardware control and optical experiments; specifically, conducting high-contrast imaging experiments while monitoring humidity and power status, with graceful shutdown processes even for unexpected errors. Python contains a construct called a “context manager” that allows you to define code to run when a resource is opened or closed. Context managers ensure that a resource is properly closed, even when unhandled errors occur. Harnessing the context manager design, we also use Python’s multiprocessing library to monitor humidity and power status without interrupting the experiment. Upon detecting a safety problem, the master process sends an event to the child process that triggers the context managers to gracefully close any open resources. This infrastructure allows us to queue up several experiments and safely operate the testbed without a human in the loop.
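The context-manager pattern described here is standard Python and can be shown in miniature. The `FakeMirror` class is a stand-in for a hardware driver, not part of the HiCAT code base:

```python
from contextlib import contextmanager

class FakeMirror:
    """Stand-in for a fragile hardware resource (e.g. a deformable
    mirror) that must always be closed, even on failure."""
    def __init__(self):
        self.connected = False
    def connect(self):
        self.connected = True
    def close(self):
        self.connected = False

@contextmanager
def mirror_resource(mirror):
    """Open the hardware on entry; guarantee close() on exit."""
    mirror.connect()
    try:
        yield mirror
    finally:
        mirror.close()  # runs even if the experiment body raises

mirror = FakeMirror()
try:
    with mirror_resource(mirror) as m:
        # An unhandled error inside the experiment...
        raise RuntimeError("simulated experiment failure")
except RuntimeError:
    pass

# ...still leaves the hardware safely closed.
assert not mirror.connected
```

In the full infrastructure a separate monitoring process would, on a humidity or power fault, signal the experiment process so that this same `finally` path runs for every open device.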
Error analysis of satellite attitude determination using a vision-based approach
NASA Astrophysics Data System (ADS)
Carozza, Ludovico; Bevilacqua, Alessandro
2013-09-01
Improvements in communication and processing technologies have opened the doors to exploit on-board cameras to compute objects' spatial attitude using only the visual information from sequences of remote sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, referring to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through on-purpose simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. The approach we present is also of general interest for related application domains that require an accurate estimation of three-dimensional orientation parameters (e.g., robotics, airborne stabilization).
NASA Astrophysics Data System (ADS)
Lancaster, N.; LeBlanc, D.; Bebis, G.; Nicolescu, M.
2015-12-01
Dune-field patterns are believed to behave as self-organizing systems, but what causes the patterns to form is still poorly understood. The most obvious (and in many cases the most significant) aspect of a dune system is the pattern of dune crest lines. Extracting meaningful features such as crest length, orientation, spacing, bifurcations, and merging of crests from image data can reveal important information about the specific dune-field morphological properties, development, and response to changes in boundary conditions, but manual methods are labor-intensive and time-consuming. We are developing the capability to recognize and characterize patterns of sand dunes on planetary surfaces. Our goal is to develop a robust methodology and the necessary algorithms for automated or semi-automated extraction of dune morphometric information from image data. Our main approach uses image processing methods to extract gradient information from satellite images of dune fields. Typically, the gradients have a dominant magnitude and orientation. In many cases, the images have two major dominant gradient orientations, for the sunny and shaded sides of the dunes. A histogram of the gradient orientations is used to determine the dominant orientation. A threshold is applied to the image based on gradient orientations which agree with the dominant orientation. The contours of the binary image can then be used to determine the dune crest lines, based on pixel intensity values. Once the crest lines have been extracted, the morphological properties can be computed. We have tested our approach on a variety of images of linear and crescentic (transverse) dunes and compared the dune detection algorithms with manually-digitized dune crest lines, achieving true positive values of 0.57-0.99 and false positive values of 0.30-0.67, indicating that our approach is generally robust.
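The histogram-then-threshold step described above can be sketched with toy per-pixel gradient orientations; the bin width, tolerance, and data are illustrative assumptions, not the authors' parameters:

```python
# Sketch of the dominant-gradient-orientation step: histogram the
# per-pixel gradient orientations (degrees), take the modal bin, then
# keep only pixels whose orientation agrees with that dominant value.

def dominant_orientation(orientations, bin_width=10):
    """Return the centre of the most populated orientation bin."""
    bins = {}
    for o in orientations:
        b = int(o // bin_width)
        bins[b] = bins.get(b, 0) + 1
    best = max(bins, key=bins.get)
    return best * bin_width + bin_width / 2

def mask_by_orientation(orientations, dominant, tol=15):
    """Binary mask of pixels agreeing with the dominant orientation."""
    return [abs(o - dominant) <= tol for o in orientations]
```

In the full pipeline the resulting binary image is contoured, and crest lines are traced along the contours using pixel intensities.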
Constraint processing in our extensible language for cooperative imaging system
NASA Astrophysics Data System (ADS)
Aoki, Minoru; Murao, Yo; Enomoto, Hajime
1996-02-01
The extensible WELL (Window-based Elaboration Language) has been developed using the concept of a common platform, where both client and server can communicate with each other with support from a communication manager. This extensible language is based on an object oriented design that introduces constraint processing. Every kind of service in the extensible language, including imaging, is controlled by constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation. Necessary service integrations are satisfied by cooperative processes using constraints. Constraints are treated similarly to data, because the system should be flexible in the execution of many kinds of services. A similar control process is defined using intensional logic. There are two kinds of constraints: temporal and modal. Regarding the constraints, the predicate format, as a relation between attribute values, can serve as a warrant for an entity's validity as data. As an imaging example, a processing procedure of interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system and how the constraints work for generating moving pictures.
NASA Astrophysics Data System (ADS)
Ambekar Ramachandra Rao, Raghu; Mehta, Monal R.; Toussaint, Kimani C., Jr.
2010-02-01
We demonstrate the use of Fourier transform second-harmonic generation (FT-SHG) imaging of collagen fibers as a means of performing quantitative analysis of images of selected spatial regions in porcine trachea, ear, and cornea. Two quantitative markers, preferred orientation and maximum spatial frequency, are proposed for differentiating structural information between various spatial regions of interest in the specimens. The ear shows consistent maximum spatial frequency and orientation, as also observed in its real-space image. However, there are observable changes in the orientation and minimum feature size of fibers in the trachea, indicating a more random organization. Finally, the analysis is applied to a 3D image stack of the cornea. It is shown that the standard deviation of the orientation is sensitive to the randomness in fiber orientation. Regions with variations in the maximum spatial frequency, but with relatively constant orientation, suggest that maximum spatial frequency is useful as an independent quantitative marker. We emphasize that FT-SHG is a simple, yet powerful, tool for extracting information from images that is not obvious in real space. This technique can be used as a quantitative biomarker to assess the structure of collagen fibers that may change due to damage from disease or physical injury.
An, Xu; Gong, Hongliang; Yin, Jiapeng; Wang, Xiaochun; Pan, Yanxia; Zhang, Xian; Lu, Yiliang; Yang, Yupeng; Toth, Zoltan; Schiessl, Ingo; McLoughlin, Niall; Wang, Wei
2014-01-01
Visual scenes can be readily decomposed into a variety of oriented components, the processing of which is vital for object segregation and recognition. In primate V1 and V2, most neurons have small spatio-temporal receptive fields responding selectively to oriented luminance contours (first order), while only a subgroup of neurons signal non-luminance defined contours (second order). So how is the orientation of second-order contours represented at the population level in macaque V1 and V2? Here we compared the population responses in macaque V1 and V2 to two types of second-order contour stimuli generated either by modulation of contrast or phase reversal with those to first-order contour stimuli. Using intrinsic signal optical imaging, we found that the orientation of second-order contour stimuli was represented invariantly in the orientation columns of both macaque V1 and V2. A physiologically constrained spatio-temporal energy model of V1 and V2 neuronal populations could reproduce all the recorded population responses. These findings suggest that, at the population level, the primate early visual system processes the orientation of second-order contours initially through a linear spatio-temporal filter mechanism. Our results of population responses to different second-order contour stimuli support the idea that the orientation maps in primate V1 and V2 can be described as a spatial-temporal energy map. PMID:25188576
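The energy-model mechanism invoked above can be illustrated in a heavily simplified, purely spatial 1-D form: the squared responses of a quadrature (even/odd) filter pair give a phase-invariant "oriented energy". The signals and frequency values are made up; this is a pedagogical sketch, not the authors' physiologically constrained model:

```python
import math

# 1-D energy-model unit: project a signal onto an even (cosine) and an
# odd (sine) filter at a given spatial frequency, then sum the squared
# responses. The result is large when the signal contains that
# frequency, regardless of its phase.

def energy(signal, freq):
    even = sum(s * math.cos(2 * math.pi * freq * i)
               for i, s in enumerate(signal))
    odd = sum(s * math.sin(2 * math.pi * freq * i)
              for i, s in enumerate(signal))
    return even ** 2 + odd ** 2

# A sinusoidal "contour" at frequency 0.1 cycles/sample excites the
# matched energy unit far more than a mismatched one.
signal = [math.cos(2 * math.pi * 0.1 * i) for i in range(50)]
```

The full model replaces these 1-D projections with oriented spatio-temporal filters, so that second-order (contrast- or phase-defined) contours also drive the orientation-tuned energy channels.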
Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes
Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel
2015-01-01
Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553
Teaching Adaptability of Object-Oriented Programming Language Curriculum
ERIC Educational Resources Information Center
Zhu, Xiao-dong
2012-01-01
The evolution of object-oriented programming languages includes updates of their own versions, updates of development environments, and the reform of new languages upon old languages. In this paper, an evolution analysis of object-oriented programming languages is presented in terms of their characteristics and development. The notion of adaptive teaching upon…
Object-oriented design for accelerator control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stok, P.D.V. van der; Berk, F. van den; Deckers, R.
1994-02-01
An object-oriented design for the distributed computer control system of the accelerator ring EUTERPE is presented. Because of the experimental nature of the ring, flexibility is of the utmost importance. The object-oriented principles have contributed considerably to the flexibility of the design incorporating multiple views, multi-level access and distributed surveillance.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-29
... DEPARTMENT OF STATE [Public Notice: 7277] Culturally Significant Objects Imported for Exhibition Determinations: ``The Orient Expressed: Japan's Influence on Western Art, 1854-1918'' SUMMARY: Notice is hereby... hereby determine that the objects to be included in the exhibition ``The Orient Expressed: Japan's...
Object-Oriented Dynamic Bayesian Network-Templates for Modelling Mechatronic Systems
2002-05-04
The object-oriented paradigm is a new but proven technology. For modelling mechanical systems, tools such as ADAMS are widespread; the approach also covers hardware (sub-)systems such as thermal flow or hydraulics (see Figure 1). On the software side, the object-oriented paradigm is by now well established.
Initiating Formal Requirements Specifications with Object-Oriented Models
NASA Technical Reports Server (NTRS)
Ampo, Yoko; Lutz, Robyn R.
1994-01-01
This paper reports results of an investigation into the suitability of object-oriented models as an initial step in developing formal specifications. The requirements for two critical system-level software modules were used as target applications. It was found that creating object-oriented diagrams prior to formally specifying the requirements enhanced the accuracy of the initial formal specifications and reduced the effort required to produce them. However, the formal specifications incorporated some information not found in the object-oriented diagrams, such as higher-level strategy or goals of the software.
Improved ultrasonic TV images achieved by use of Lamb-wave orientation technique
NASA Technical Reports Server (NTRS)
Berger, H.
1967-01-01
Lamb-wave sample orientation technique minimizes the interference from standing waves in continuous wave ultrasonic television imaging techniques used with thin metallic samples. The sample under investigation is oriented such that the wave incident upon it is not normal, but slightly angled.
Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality
NASA Astrophysics Data System (ADS)
Lee, I.-C.; Tsai, F.
2015-05-01
A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only a 720° panorama, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects via a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model, replacing the guide map or floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and a guidance and navigation system for the Taipei Main Station pedestrian zone.
The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, the procedure is currently manual and labor-intensive. Research is being carried out to increase the degree of automation of these procedures.
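Among the interior orientation parameters mentioned above is lens radial distortion. As a hedged illustration of how such a parameter is applied, the sketch below uses a minimal one-parameter radial model; the entry does not specify which distortion model was actually estimated, and the function names and sample points are ours.

```python
import numpy as np

def distort(p, k1):
    """Forward one-parameter radial model: p_d = p * (1 + k1 * r^2),
    with p in normalized image coordinates (origin at the principal
    point, scaled by the focal length)."""
    r2 = (p ** 2).sum(axis=-1, keepdims=True)
    return p * (1 + k1 * r2)

def undistort(pd, k1, iters=10):
    """Invert the radial model by fixed-point iteration, which
    converges quickly for mild distortion."""
    p = pd.copy()
    for _ in range(iters):
        r2 = (p ** 2).sum(axis=-1, keepdims=True)
        p = pd / (1 + k1 * r2)
    return p

pts = np.array([[0.3, -0.2], [0.1, 0.4]])
k1 = -0.15                              # mild barrel distortion (made-up value)
recovered = undistort(distort(pts, k1), k1)
print(np.allclose(recovered, pts, atol=1e-6))  # → True
```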
NASA Astrophysics Data System (ADS)
Jalbuena, Rey L.; Peralta, Rudolph V.; Tamondong, Ayin M.
2016-10-01
Mangroves are trees or shrubs that grow at the interface between the land and the sea in tropical and sub-tropical latitudes. Mangroves are essential in supporting various marine life; thus, it is important to preserve and manage these areas. There are many approaches to creating mangrove maps, one of which is through the use of Light Detection and Ranging (LiDAR). It is a remote sensing technique which uses light pulses to measure distances and to generate three-dimensional point clouds of the Earth's surface. In this study, topographic LiDAR data were used to analyze the geophysical features of the terrain and create a mangrove map. The dataset was first pre-processed using the LAStools software, which is used to process LiDAR data sets and create different layers such as the DSM, DTM, nDSM, slope, LiDAR intensity, LiDAR number of first returns, and CHM. All the aforementioned layers together were used to derive the mangrove class. Then, an Object-Based Image Analysis (OBIA) was performed using eCognition. OBIA analyzes groups of pixels with similar properties, called objects, as compared to the traditional pixel-based approach, which only examines a single pixel. Multi-threshold and multiresolution segmentation were used to delineate the different classes and split the image into objects. There are four levels of classification: first is the separation of land from water; the land class was then further divided into ground and non-ground objects; further classification of non-vegetation, mangroves, and other vegetation was done from the non-ground objects; lastly, the mangrove class was separated using field-verified training points, which were run through a Support Vector Machine (SVM) classification. Different classes were separated using different layer feature properties, such as mean, mode, standard deviation, geometrical properties, neighbor-related properties, and textural properties.
Accuracy assessment was done using a different set of field validation points. This workflow was applied to the classification of mangroves in a LiDAR dataset of Naawan and Manticao, Misamis Oriental, Philippines. The process presented in this study shows that LiDAR data and its derivatives can be used in extracting and creating mangrove maps, which can be helpful in managing the coastal environment.
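The final SVM separation step can be sketched with a toy linear SVM trained on per-object features. The features, labels, and Pegasos-style trainer below are illustrative stand-ins; the study uses eCognition's SVM on richer OBIA features.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient trainer for a toy linear SVM.
    X holds one feature vector per image object; y is +1/-1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            if y[i] * (X[i] @ w + b) < 1.0:       # hinge-loss margin violated
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1.0 - eta * lam) * w
    return w, b

# hypothetical per-object features: [mean canopy height (m), mean intensity]
X = np.array([[8.0, 0.2], [7.5, 0.3], [9.1, 0.25],   # mangrove objects (+1)
              [1.0, 0.8], [0.5, 0.9], [1.5, 0.7]])   # other objects   (-1)
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print("training accuracy:", (pred == y).mean())
```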
Representing object oriented specifications and designs with extended data flow notations
NASA Technical Reports Server (NTRS)
Buser, Jon Franklin; Ward, Paul T.
1988-01-01
The issue of using extended data flow notations to document object-oriented designs and specifications is discussed. Extended data flow notations, for the purposes here, refer to notations that are based on the rules of Yourdon/DeMarco data flow analysis. The extensions include additional notation for representing real-time systems as well as some proposed extensions specific to object-oriented development. Some advantages of data flow notations are stated. How data flow diagrams are used to represent software objects is investigated. Some problem areas with regard to using data flow notations for object-oriented development are noted. Some initial solutions to these problems are proposed.
Real-time endoscopic image orientation correction system using an accelerometer and gyrosensor.
Lee, Hyung-Chul; Jung, Chul-Woo; Kim, Hee Chan
2017-01-01
The discrepancy between spatial orientations of an endoscopic image and a physician's working environment can make it difficult to interpret endoscopic images. In this study, we developed and evaluated a device that corrects the endoscopic image orientation using an accelerometer and gyrosensor. The acceleration of gravity and angular velocity were retrieved from the accelerometer and gyrosensor attached to the handle of the endoscope. The rotational angle of the endoscope handle was calculated using a Kalman filter with transmission delay compensation. Technical evaluation of the orientation correction system was performed using a camera by comparing the optical rotational angle from the captured image with the rotational angle calculated from the sensor outputs. For the clinical utility test, fifteen anesthesiology residents performed a video endoscopic examination of an airway model with and without using the orientation correction system. The participants reported numbers written on papers placed at the left main, right main, and right upper bronchi of the airway model. The correctness and the total time it took participants to report the numbers were recorded. During the technical evaluation, errors in the calculated rotational angle were less than 5 degrees. In the clinical utility test, there was a significant time reduction when using the orientation correction system compared with not using the system (median, 52 vs. 76 seconds; P = .012). In this study, we developed a real-time endoscopic image orientation correction system, which significantly improved physician performance during a video endoscopic exam.
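The sensor-fusion idea behind this correction system can be illustrated with a simple complementary filter. Note that the study itself uses a Kalman filter with transmission-delay compensation, so this is a simplified stand-in, and the sample rate, rotation rate, and gyro bias below are made-up values.

```python
import math

def complementary_roll(gyro, accel, dt=0.01, alpha=0.98):
    """Fuse the roll rate (rad/s) about the endoscope axis with the
    roll angle implied by gravity in the accelerometer x/y readings.
    alpha weights gyro integration; (1 - alpha) pulls the estimate
    toward the drift-free (but noisy) accelerometer angle."""
    angle = math.atan2(accel[0][0], accel[0][1])
    est = [angle]
    for rate, (ax, ay) in zip(gyro[1:], accel[1:]):
        accel_angle = math.atan2(ax, ay)
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        est.append(angle)
    return est

# simulate a handle rotating at 0.5 rad/s, read by a gyro with constant bias
true_rate, bias, dt, n = 0.5, 0.05, 0.01, 500
gyro = [true_rate + bias] * n
accel = [(math.sin(true_rate * dt * k), math.cos(true_rate * dt * k))
         for k in range(n)]
est = complementary_roll(gyro, accel, dt)
true_final = true_rate * dt * (n - 1)
drift_only = bias * dt * (n - 1)              # error if we trusted the gyro alone
print(abs(est[-1] - true_final) < drift_only)  # → True
```

The accelerometer term bounds the gyro's bias drift, which is the same role the Kalman filter's measurement update plays in the paper's system.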
Unified modeling language and design of a case-based retrieval system in medical imaging.
LeBozec, C; Jaulent, M C; Zapletal, E; Degoulet, P
1998-01-01
One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM--Images and Diagnosis from Examples in Medicine--a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspect of this approach was selecting the relevant objects of the system according to user requirements and enabling visualization of the cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required, but UML seems to be a promising formalism for improving communication between developers and users.
Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.
1992-01-01
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the location of various objects in the task space conforms to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM) developed to provide taskspace database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser radar based range imaging. Through the fusion of taskspace database information and image sensor data, a verifiable taskspace model is generated providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.
Neural Network Based Sensory Fusion for Landmark Detection
NASA Technical Reports Server (NTRS)
Kumbla, Kishan K.; Akbarzadeh, Mohammad R.
1997-01-01
NASA is planning to send numerous unmanned planetary missions to explore space. This requires autonomous robotic vehicles which can navigate in an unstructured, unknown, and uncertain environment. Landmark-based navigation is a new area of research which differs from traditional goal-oriented navigation, where a mobile robot starts from an initial point and reaches a destination in accordance with a pre-planned path. Landmark-based navigation has the advantage of allowing the robot to find its way without communication with the mission control station and without exact knowledge of its coordinates. Current algorithms based on landmark navigation, however, pose several constraints. First, they require large memories to store the images. Second, the task of comparing the images using traditional methods is computationally intensive, and consequently real-time implementation is difficult. The method proposed here consists of three stages. The first stage utilizes a heuristic-based algorithm to identify significant objects. The second stage utilizes a neural network (NN) to efficiently classify images of the identified objects. The third stage combines distance information with the classification results of the neural networks for efficient and intelligent navigation.
NASA Astrophysics Data System (ADS)
Yang, Xiucheng; Chen, Li
2017-04-01
Urban surface water is characterized by complex surface conditions and the small size of water bodies, and the mapping of urban surface water is currently a challenging task. Moderate-resolution remote sensing satellites provide effective ways of monitoring surface water. This study conducts an exploratory evaluation of the performance of the newly available Sentinel-2A multispectral instrument (MSI) imagery for detecting urban surface water. An automatic framework that integrates pixel-level threshold adjustment and object-oriented segmentation is proposed. Based on the automated workflow, different combinations of visible, near-infrared, and short-wave infrared bands in the Sentinel-2 image via different water indices are first compared. Results show that the object-level modified normalized difference water index (MNDWI, with band 11) and the automated water extraction index are feasible for urban surface water mapping with Sentinel-2 MSI imagery. Moreover, comparative results are obtained utilizing the optimal MNDWI from Sentinel-2 and Landsat 8 images, respectively. Consequently, Sentinel-2 MSI achieves a kappa coefficient of 0.92, compared with 0.83 from the Landsat 8 Operational Land Imager.
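The MNDWI used above has a simple closed form, MNDWI = (Green − SWIR) / (Green + SWIR). The numpy sketch below applies it with a zero threshold; the paper's automatic framework adjusts the threshold per scene, and the toy reflectance values are ours.

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index:
    (Green - SWIR) / (Green + SWIR). For Sentinel-2 MSI, the green
    band is B3 and the SWIR band referenced above is B11; water pushes
    MNDWI toward +1, built-up areas and soil toward -1."""
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    return (green - swir) / np.maximum(green + swir, 1e-12)

# toy reflectances: a water pixel (high green, very low SWIR)
# next to a built-up pixel (SWIR exceeds green)
green = np.array([[0.12, 0.20]])
swir = np.array([[0.02, 0.25]])
idx = mndwi(green, swir)
water_mask = idx > 0.0        # a common starting threshold, tuned per scene
print(water_mask.tolist())    # → [[True, False]]
```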
Design and implementation of a biomedical image database (BDIM).
Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R
1988-01-01
We developed a biomedical image database (BDIM) which proposes a standardized representation of value arrays, such as images and curves, and of their associated parameters, independently of their acquisition mode, to make their transmission and processing easier. It includes three kinds of user-oriented interactions. The network concept was kept as a constraint so that the BDIM could be incorporated into a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and their associated parameters involves two distinct bases of objects, linked together via a gateway. The first one manages arrays according to their storage mode: long-term storage on optionally on-line mass storage devices and, for consultations, partial copies of long-term stored arrays on hard disk. The second one manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations, some of which are in agreement with groups defined by the ACR/NEMA. The other relations describe objects resulting from processing initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage and their pathnames constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. The query and array (alone or in sequences) retrieval module has access to the relations via a level in which a dictionary managed by ORACLE is included. This dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS. (ABSTRACT TRUNCATED AT 250 WORDS)
Secure access control and large scale robust representation for online multimedia event detection.
Liu, Changyu; Lu, Bin; Li, Huiling
2014-01-01
We developed an online multimedia event detection (MED) system. However, there are a secure access control issue and a large-scale robust representation issue when integrating traditional event detection algorithms into the online environment. For the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role-based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were extracted from response pyramids of 1000 generic object detectors which were trained on standard annotated image datasets, such as the ImageNet dataset. A spatial bag-of-words tiling approach was then adopted to encode these feature vectors, bridging the gap between objects and events. Furthermore, we performed experiments in the context of event classification on the challenging TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms state-of-the-art approaches.
3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand
NASA Astrophysics Data System (ADS)
Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.
2015-08-01
In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter than the 3D scanner. Hence, the camera is available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, which is a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained from the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques are required for the final results, e.g., noise reduction, surface simplification, and reconstruction. The reconstructed 3D models can be provided for public access via websites, DVDs, or printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored due to deterioration over their lifetime, natural disasters, etc.
Orienting apples for imaging using their inertial properties and random apple loading
USDA-ARS?s Scientific Manuscript database
The inability to control apple orientation during imaging has hindered development of automated systems for sorting apples for defects such as bruises and for safety issues such as fecal contamination. Recently, a potential method for orienting apples based on their inertial properties was discovere...
Contour symmetry detection: the influence of axis orientation and number of objects.
Friedenberg, J; Bertamini, M
2000-09-01
Participants discriminated symmetrical from random contours connected by straight lines to form part of one or two objects. In experiment one, symmetrical contours were translated or reflected and presented at vertical, horizontal, and oblique axis orientations, with orientation constant within blocks. Translated two-object contours were detected more easily than one-object contours, replicating a "lock-and-key" effect obtained previously for vertical orientations only [M. Bertamini, J.D. Friedenberg, M. Kubovy, Acta Psychologica, 95 (1997) 119-140]. A second experiment extended these results to a wider variety of axis orientations under mixed-block conditions. The pattern of performance for translation and reflection at different orientations corresponded in both experiments, suggesting that orientation is processed similarly in the detection of these symmetries.
Towards a general object-oriented software development methodology
NASA Technical Reports Server (NTRS)
Seidewitz, ED; Stark, Mike
1986-01-01
Object diagrams were used to design a 5000-statement team training exercise and to design the entire dynamics simulator. The object diagrams are also being used to design another 50,000-statement Ada system and a personal computer based system that will be written in Modula II. The design methodology evolves out of these experiences as well as the limitations of other methods that were studied. Object diagrams, abstraction analysis, and associated principles provide a unified framework which encompasses concepts from Yourdon, Booch, and Cherry. This general object-oriented approach handles high-level system design, possibly with concurrency, through object-oriented decomposition down to a completely functional level. How object-oriented concepts can be used in other phases of the software life-cycle, such as specification and testing, is being studied concurrently.
Graf, M; Kaping, D; Bülthoff, H H
2005-03-01
How do observers recognize objects after spatial transformations? Recent neurocomputational models have proposed that object recognition is based on coordinate transformations that align memory and stimulus representations. If the recognition of a misoriented object is achieved by adjusting a coordinate system (or reference frame), then recognition should be facilitated when the object is preceded by a different object in the same orientation. In the two experiments reported here, two objects were presented in brief masked displays that were in close temporal contiguity; the objects were in either congruent or incongruent picture-plane orientations. Results showed that naming accuracy was higher for congruent than for incongruent orientations. The congruency effect was independent of superordinate category membership (Experiment 1) and was found for objects with different main axes of elongation (Experiment 2). The results indicate congruency effects for common familiar objects even when they have dissimilar shapes. These findings are compatible with models in which object recognition is achieved by an adjustment of a perceptual coordinate system.
NASA Astrophysics Data System (ADS)
Gupta, S.; Lohani, B.
2014-05-01
Mobile augmented reality is a next-generation technology for visualising the 3D real world intelligently. The technology is expanding at a fast pace to upgrade the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of various objects captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud, stored on a server, and the image that is captured by the mobile device. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of LiDAR data points which lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensity. The mobile image and pseudo-intensity image are then registered using the image registration method SIFT, thereby generating a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method and uses an experimental setup to mimic the mobile phone and server system, presenting some initial but encouraging results.
Practical implementation of channelized hotelling observers: effect of ROI size
NASA Astrophysics Data System (ADS)
Ferrero, Andrea; Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.
2017-03-01
Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO's performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO's performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.
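A minimal version of such a CHO is straightforward to sketch: project each ROI onto a bank of Gabor channels, then form the Hotelling template from the pooled channel covariance. The channel parameters, image sizes, and synthetic signal below are illustrative choices of ours, not those of the study.

```python
import numpy as np

def gabor_channels(size, freqs, n_orient):
    """Build a bank of Gabor channel templates (one flattened image per
    channel) at the given spatial frequencies (cycles/pixel) and
    equally spaced orientations."""
    y, x = np.mgrid[0:size, 0:size] - size // 2
    chans = []
    for f in freqs:
        sigma = 0.5 / f                               # envelope scales with period
        env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        for k in range(n_orient):
            th = np.pi * k / n_orient
            carrier = np.cos(2 * np.pi * f * (x * np.cos(th) + y * np.sin(th)))
            chans.append((env * carrier).ravel())
    return np.array(chans)                            # (n_channels, size*size)

def cho_dprime(imgs_sig, imgs_noise, U):
    """Channelized Hotelling observer detectability index."""
    vs = imgs_sig.reshape(len(imgs_sig), -1) @ U.T    # channel outputs
    vn = imgs_noise.reshape(len(imgs_noise), -1) @ U.T
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))           # pooled channel covariance
    dv = vs.mean(0) - vn.mean(0)
    w = np.linalg.solve(S, dv)                        # Hotelling template
    return float(np.sqrt(dv @ w))

rng = np.random.default_rng(1)
size = 32
U = gabor_channels(size, freqs=[1/16, 1/8], n_orient=4)
y, x = np.mgrid[0:size, 0:size] - size // 2
signal = 0.5 * np.exp(-(x**2 + y**2) / (2 * 3.0**2))  # low-contrast blob "object"
noise = rng.normal(0, 1, (200, size, size))
dprime = cho_dprime(noise + signal, rng.normal(0, 1, (200, size, size)), U)
print(dprime > 0)  # → True
```

Shrinking the ROI (`size`) while keeping low-frequency channels in `freqs` reproduces the truncation issue the paper quantifies: the coarsest Gabor envelopes no longer fit inside the ROI.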