Science.gov

Sample records for 3d computer vision

  1. 3-D Signal Processing in a Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system intended to locate and identify internal structural failures. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...
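
    To make the idea of adaptive 3-D filtering concrete, the following is a minimal sketch (a Lee-style local-statistics filter rather than Unser's exact formulation; the noise variance, volume size and defect geometry are illustrative assumptions, not values from the paper):

      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(5)
      volume = rng.normal(100.0, 5.0, (32, 32, 32))   # noisy "wood" background
      volume[12:20, 12:20, 12:20] -= 40.0             # internal defect (lower density)

      noise_var = 25.0                                # assumed sensor noise variance
      mean = uniform_filter(volume, size=5)           # local mean over a 5x5x5 window
      var = uniform_filter(volume ** 2, size=5) - mean ** 2
      gain = np.clip((var - noise_var) / np.maximum(var, 1e-9), 0.0, 1.0)
      filtered = mean + gain * (volume - mean)        # smooth flat regions, keep edges
      print(filtered.shape, round(float(filtered.std()), 2))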

  2. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  3. Design and highly accurate 3D displacement characterization of monolithic SMA microgripper using computer vision

    NASA Astrophysics Data System (ADS)

    Bellouard, Yves; Sulzmann, Armin; Jacot, Jacques; Clavel, Reymond

    1998-01-01

    In the robotics field, several grippers have been developed using SMA technologies, but, so far, SMA is only used as the actuating part of the mechanical device. However, a mechanical device requires assembly and in some cases this means friction. In the case of micro-grippers, this becomes a major problem due to the small size of the components. In this paper, a new monolithic concept of micro-gripper is presented. This concept is applied to the grasping of sub-millimeter optical elements such as Selfoc lenses and the fastening of optical fibers. Measurements are performed using a newly developed high-precision 3D computer vision tracking system to characterize the spatial positions of the micro-gripper in action. To characterize the relative motion of the micro-gripper, its natural texture is used to compute 3D displacement. The microscope CCD receives high-frequency changes in light intensity from the surface of the gripper. Using high-resolution camera calibration, passive autofocus algorithms and 2D object recognition, the position of the micro-gripper can be characterized in the 3D workspace and the gripper can be guided in future micro-assembly tasks.
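
    As a concrete illustration of combining calibration, autofocus depth and 2-D recognition to place the gripper in the 3-D workspace, here is a minimal sketch; the intrinsic matrix, matched pixel and depth value are assumptions for illustration, not values from the paper:

      import numpy as np

      K = np.array([[2000.0,    0.0, 640.0],   # assumed focal lengths (px) and
                    [   0.0, 2000.0, 512.0],   # principal point of the microscope camera
                    [   0.0,    0.0,   1.0]])

      def backproject(u, v, depth_mm, K):
          """Back-project pixel (u, v) at a known depth into camera coordinates (mm)."""
          ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # normalized viewing ray
          return ray * depth_mm                            # scale by the autofocus depth

      # gripper texture matched at pixel (702, 488); autofocus estimates 35.2 mm depth
      print(backproject(702, 488, 35.2, K))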

  4. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  5. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    PubMed Central

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  6. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    PubMed

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non
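
    A minimal sketch of the unsupervised idea described above, clustering temporal windows of a depth-derived 3D centroid trajectory into recurring movement patterns; the trajectory data, window length and cluster count are illustrative assumptions rather than the authors' pipeline:

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      centroids_3d = rng.normal(size=(3000, 3))   # stand-in for per-frame (x, y, z) positions

      win = 30                                    # 30-frame temporal windows
      windows = np.array([centroids_3d[i:i + win].ravel()
                          for i in range(0, len(centroids_3d) - win, win)])

      labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(windows)
      print(np.bincount(labels))                  # how often each movement pattern occurs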

  7. UAV and Computer Vision in 3D Modeling of Cultural Heritage in Southern Italy

    NASA Astrophysics Data System (ADS)

    Barrile, Vincenzo; Gelsomino, Vincenzo; Bilotta, Giuliana

    2017-08-01

    Along the Italo Falcomatà waterfront of Reggio Calabria one can admire the most extensive surviving stretch of the Hellenistic-period walls of the ancient city of Rhegion. The so-called Greek Walls are one of the most significant and visible traces of the past linked to the culture of Ancient Greece in the territory of Reggio Calabria. Over the centuries, up to the reconstruction of Reggio after the earthquake of 1783, this stretch of wall remained part of the city's outer fortifications and was restored countless times to cope with degradation over time and with adaptations to increasingly innovative and sophisticated siege techniques. The walls have been the subject of several studies on their history, construction techniques, maintenance and restoration. This note describes the methodology for the implementation of a three-dimensional model of the Greek Walls conducted by the Geomatics Laboratory of the DICEAM Department of the University “Mediterranea” of Reggio Calabria. The 3D modelling is based on imaging techniques, such as Digital Photogrammetry and Computer Vision, using a drone. The acquired digital images were then processed using the commercial software Agisoft PhotoScan. The results demonstrate the effectiveness of the technique in the field of cultural heritage, as an attractive alternative to more expensive and demanding techniques such as laser scanning.
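
    The core photogrammetric step behind such image-based reconstruction is triangulation of matched image points from oriented cameras. A minimal sketch using the linear (DLT) solution follows; the camera matrices, baseline and pixel correspondence are made-up values, not data from the survey:

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          """Linear (DLT) triangulation of one correspondence x1 <-> x2."""
          A = np.vstack([x1[0] * P1[2] - P1[0],
                         x1[1] * P1[2] - P1[1],
                         x2[0] * P2[2] - P2[0],
                         x2[1] * P2[2] - P2[1]])
          X = np.linalg.svd(A)[2][-1]
          return X[:3] / X[3]

      K = np.diag([1500.0, 1500.0, 1.0]); K[:2, 2] = [960, 540]          # assumed intrinsics
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference image
      P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # 1 m baseline

      print(triangulate(P1, P2, (975.0, 560.0), (940.0, 560.0)))         # point in metres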

  8. Intelligent robots and computer vision XII: Active vision and 3D methods; Proceedings of the Meeting, Boston, MA, Sept. 8, 9, 1993

    SciTech Connect

    Casasent, D.P.

    1993-01-01

    Topics addressed include active vision for intelligent robots, 3D vision methods, tracking in robotics and vision, visual servoing and egomotion in robotics, egomotion and time-sequential processing, and control and planning in robotics and vision. Particular attention is given to invariants in visual motion, generic target tracking using color, recognizing 3D articulated-line-drawing objects, range data acquisition from an encoded structured light pattern, and 3D edge orientation detection. Also discussed are acquisition of randomly moving objects by visual guidance, fundamental principles of robot vision, high-performance visual servoing for robot end-point control, a long sequence analysis of human motion using eigenvector decomposition, and sequential computer algorithms for printed circuit board inspection.

  9. Analysis of 3-D images of dental imprints using computer vision

    NASA Astrophysics Data System (ADS)

    Aubin, Michele; Cote, Jean; Laurendeau, Denis; Poussart, Denis

    1992-05-01

    This paper addresses two important aspects of dental analysis: (1) location and (2) identification of the types of teeth by means of 3-D image acquisition and segmentation. The 3-D images of both maxillaries are acquired using a wax wafer as support. The interstices between teeth are detected by non-linear filtering of the 3-D and grey-level data. Two operators are presented: one for the detection of the interstices between incisors, canines, and premolars and one for those between molars. Teeth are then identified by mapping the imprint under analysis on the computer model of an 'ideal' imprint. For the mapping to be valid, a set of three reference points is detected on the imprint. Then, the points are put in correspondence with similar points on the model. Two such points are chosen based on a least-squares fit of a second-order polynomial of the 3-D data in the area of canines. This area is of particular interest since the canines show a very characteristic shape and are easily detected on the imprint. The mapping technique is described in detail in the paper as well as pre-processing of the 3-D profiles. Experimental results are presented for different imprints.
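
    The least-squares step mentioned above can be illustrated with a small sketch that fits a second-order polynomial z = f(x, y) to 3-D data around a canine-like bump; the synthetic data and recovered coefficients are purely illustrative:

      import numpy as np

      rng = np.random.default_rng(1)
      x, y = rng.uniform(-1, 1, (2, 400))
      z = 2.0 - 0.8 * x**2 - 0.5 * y**2 + 0.05 * rng.normal(size=400)  # canine-like bump

      # design matrix for z = c0 + c1*x + c2*y + c3*x*y + c4*x^2 + c5*y^2
      A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
      coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
      print(np.round(coeffs, 2))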

  10. Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Unmanned Aerial System Metrology

    DTIC Science & Technology

    2013-10-18

    Indexed excerpt from the report "Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Small Unmanned Aircraft System Metrology" (AFIT-ENY-DS-13-D-). The excerpt also includes a cited reference: Zhang, J., Y. Wang, J. Chen, and K. Xue, "A framework of surveillance system using a PTZ camera," Computer Science and Information Technology.

  11. Hyperspeed data acquisition for 3D computer vision metrology as applied to law enforcement

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.

    1997-02-01

    cycling at 1 millisecond, each pattern is projected and recorded in a cycle time of 1/500th second. An entire set of patterns can then be recorded within 1/60th second. This pattern set contains all the information necessary to calculate a 3-D map. The use of hyper-speed parallel video cameras in conjunction with high speed modulators enables video data rate acquisition of all data necessary to calculate numerical digital 3-D metrological surface data. Thus a 3-D video camera can operate at the rate of a conventional 2-D video camera. The speed of actual 3-D output information is a function of the speed of the computer, a parallel processor being preferred for the task. With video rate 3-D data acquisition law enforcement could survey crime scenes, obtain evidence, watch and record people, packages, suitcases, and record disaster scenes very rapidly.
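
    A quick back-of-the-envelope check of the quoted acquisition rates (assuming one pattern per 1/500 s and a full pattern set within one 1/60 s video field):

      pattern_time_s = 1.0 / 500      # assumed exposure/readout per projected pattern
      set_budget_s = 1.0 / 60         # one video field period
      patterns_per_set = int(set_budget_s // pattern_time_s)
      print(patterns_per_set)         # 8 patterns fit within a 1/60 s window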

  12. Microvision system (MVS): a 3D computer graphic-based microrobot telemanipulation and position feedback by vision

    NASA Astrophysics Data System (ADS)

    Sulzmann, Armin; Breguet, Jean-Marc; Jacot, Jacques

    1995-12-01

    The aim of our project is to control the position in 3D space of a micro robot with sub-micron accuracy and to manipulate microsystems aided by real-time 3D computer graphics (virtual reality). As microsystems and micro structures become smaller, it is necessary to build a micro robot ((mu)-robot) capable of manipulating these systems and structures with a precision of 1 micrometer or even higher. These movements have to be controlled and guided. The first part of our project was to develop a real-time 3D computer graphics (virtual reality) environment as a man-machine interface to guide the newly developed robot, similar to the environment we built in macroscopic robotics. Secondly, we want to evaluate measurement techniques to verify its position in the region of interest (workspace). A new type of microrobot has been developed for our purpose. Its simple and compact design is believed to be of promise in the microrobotics field. Stepping motion allows speeds up to 4 mm/s. Resolution smaller than 10 nm is achievable. We also focus on the vision system and on the virtual reality interface of the complex system. Basically, the user interacts with the virtual 3D microscope and sees the (mu)-robot as if he were looking through a real microscope. He is able to simulate the assembly of the missing parts, e.g. parts of a micromotor, beforehand in order to verify the assembly manipulation steps such as measuring, moving the table to the right position or performing the manipulation. Micromanipulation, a form of teleoperation, is then performed by the robot unit and the position is controlled by vision. First results have shown that guided manipulations with sub-micron absolute accuracy can be achieved. The key idea of this approach is to use the intuitiveness of immersed vision to perform robotics tasks in an environment where human has only access

  13. Vision models for 3D surfaces

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda

    1992-11-01

    Different approaches to computational stereo to represent human stereo vision have been developed over the past two decades. The Marr-Poggio theory of human stereo vision is probably the most widely accepted model of human stereo vision. However, recently developed motion stereo models which use a sequence of images taken by either a moving camera or a moving object provide an alternative method of achieving multi-resolution matching without the use of Laplacian of Gaussian operators. While using image sequences, the baseline between two camera positions for an image pair is changed for the subsequent image pair so as to achieve different resolution for each image pair. Having different baselines also avoids the inherent occlusion problem in stereo vision models. The advantage of using multi-resolution images acquired by cameras positioned at different baselines over those acquired by LOG operators is that one does not have to encounter spurious edges often created by zero-crossings in the LOG operated images. Therefore, in designing a computer vision system, a motion stereo model is more appropriate than a stereo vision model. However, in some applications where only a stereo pair of images is available, recovery of 3D surfaces of natural scenes is possible in a computationally efficient manner by using cepstrum matching and regularization techniques. Section 2 of this paper describes a motion stereo model using multi-scale cepstrum matching for the detection of disparity between image pairs in a sequence of images and subsequent recovery of 3D surfaces from a depth map obtained by a non-convergent triangulation technique. Section 3 presents a 3D surface recovery technique from a stereo pair using cepstrum matching for disparity detection and cubic B-splines for surface smoothing. Section 4 contains the results of 3D surface recovery using both of the techniques mentioned above. Section 5 discusses the merit of 2D cepstrum matching and cubic B
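
    The cepstrum-matching idea can be reduced to a tiny sketch: a signal containing an echo at the disparity lag produces a cepstral peak at that lag, which is the property exploited when left and right image windows are combined. The scanline data and lag below are synthetic, and this is an illustration of the principle rather than the paper's code:

      import numpy as np

      rng = np.random.default_rng(2)
      f = rng.normal(size=256)                  # synthetic image scanline
      d = 7                                     # true disparity (echo lag)
      s = f + np.roll(f, d)                     # scanline plus its shifted copy

      spectrum = np.abs(np.fft.fft(s)) ** 2
      cepstrum = np.abs(np.fft.ifft(np.log(spectrum + 1e-12)))
      print(np.argmax(cepstrum[1:128]) + 1)     # peak at lag 7 reveals the disparity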

  14. 3-D vision and range finding techniques

    NASA Astrophysics Data System (ADS)

    Monchaud, Serge

    The introduction of third-generation robots in automated systems is impeded by the absence of 3-D sensors collecting panoramic range data at medium distance (0-10 meters) in a large volume (up to 100 m³). The work described in the present paper offers a number of solutions to this general problem. Our system is built around a 2-D passive machine vision system connected to various cameras (VIDICON and CCD). The host computer (HP 1000) pilots numerous kinds of range finders (acoustic and optical). The concept of multisensory range finders is introduced to allow the best use of each type (active methods). This 3-D vision has been tested in two fields of application: in robotics, for the absolute location of a mobile robot; and in audiovisual production, for placing objects or actors in a 3-D synthetic scene. In some cases the absolute location problem is solved with an opto-electronic remote tracking measurement system. It is the last part of our 3-D machine vision.

  15. Computer Vision Tracking Using Particle Filters for 3D Position Estimation

    DTIC Science & Technology

    2014-03-27

    Indexed excerpt: ...focus on particle filters. Photogrammetry is the process of determining 3-D coordinates through images. The mathematical underpinnings of photogrammetry are rooted in the 1480s with Leonardo da Vinci's study of perspectives [8, p. 1]. However, digital photogrammetry did not emerge

  16. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  17. Dental wear estimation using a digital intra-oral optical scanner and an automated 3D computer vision method.

    PubMed

    Meireles, Agnes Batista; Vieira, Antonio Wilson; Corpas, Livia; Vandenberghe, Bart; Bastos, Flavia Souza; Lambrechts, Paul; Campos, Mario Montenegro; Las Casas, Estevam Barbosa de

    2016-01-01

    The objective of this work was to propose an automated and direct process to grade tooth wear intra-orally. Eight extracted teeth were etched with acid for different times to produce wear and scanned with an intra-oral optical scanner. Computer vision algorithms were used for alignment and comparison among models. Wear volume was estimated and visual scoring was achieved to determine reliability. Results demonstrated that it is possible to directly detect submillimeter differences in teeth surfaces with an automated method with results similar to those obtained by direct visual inspection. The investigated method proved to be reliable for comparison of measurements over time.
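
    Once the before and after scans are aligned, the wear estimate reduces to integrating the height loss over a common grid. A hedged sketch with synthetic height maps follows (the sampling pitch and facet depth are assumptions, not values from the study):

      import numpy as np

      cell_mm2 = 0.05 ** 2                       # assumed 50 um lateral sampling
      before = np.full((200, 200), 2.00)         # occlusal height map (mm), first scan
      after = before.copy()
      after[80:120, 80:120] -= 0.12              # simulated acid-induced wear facet

      loss = np.clip(before - after, 0.0, None)  # only material removal counts
      print(round(float(loss.sum() * cell_mm2), 3))   # wear volume, about 0.48 mm^3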

  18. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean workspace and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate the vision system and the manipulator independently, and then tie them together via a common mapping into the task space. In other words, both vision and robot refer to some common Absolute Euclidean Coordinate Frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total workspace. Second, the absolute frame, which is usually quite arbitrary, has to be the same, with a high degree of precision, for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work which is currently in progress, is described along with preliminary results and the problems encountered.
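
    The calibration-chain argument can be made concrete with a small sketch: a target measured by vision passes through the camera-to-absolute-frame and absolute-frame-to-robot transforms, so any disagreement about the shared absolute frame shifts the commanded target by the same amount. All transforms and coordinates below are invented for illustration:

      import numpy as np

      def se3(R, t):
          T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t
          return T

      T_world_cam = se3(np.eye(3), [0.0, 0.5, 1.2])      # from vision calibration
      T_robot_world = se3(np.eye(3), [-0.3, 0.0, 0.0])   # from robot calibration
      p_cam = np.array([0.1, -0.2, 0.8, 1.0])            # target seen by stereo vision

      p_robot = T_robot_world @ T_world_cam @ p_cam      # target in the robot frame
      print(p_robot[:3])

      # a 5 mm disagreement about the shared absolute frame shifts the target by 5 mm
      T_robot_world_bad = se3(np.eye(3), [-0.295, 0.0, 0.0])
      print((T_robot_world_bad @ T_world_cam @ p_cam - p_robot)[:3])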

  19. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  1. Towards autonomic computing in machine vision applications: techniques and strategies for in-line 3D reconstruction in harsh industrial environments

    NASA Astrophysics Data System (ADS)

    Molleda, Julio; Usamentiaga, Rubén; García, Daniel F.; Bulnes, Francisco G.

    2011-03-01

    Nowadays, machine vision applications require skilled users to configure, tune, and maintain them. Because such users are scarce, the robustness and reliability of applications are usually significantly affected. Autonomic computing offers a set of principles such as self-monitoring, self-regulation, and self-repair which can be used to partially overcome those problems. Systems which include self-monitoring observe their internal states and extract features about them. Systems with self-regulation are capable of regulating their internal parameters to provide the best quality of service depending on the operational conditions and environment. Finally, self-repairing systems are able to detect anomalous working behavior and to provide strategies to deal with such conditions. Machine vision applications are the perfect field to apply autonomic computing techniques. This type of application has strong constraints on reliability and robustness, especially when working in industrial environments, and must provide accurate results even under changing conditions such as luminance or noise. In order to exploit the autonomic approach in a machine vision application, we believe the architecture of the system must be designed using a set of orthogonal modules. In this paper, we describe how autonomic computing techniques can be applied to machine vision systems, using as an example a real application: 3D reconstruction in harsh industrial environments based on laser range finding. The application is based on modules with different responsibilities at three layers: image acquisition and processing (low level), monitoring (middle level) and supervision (high level). High-level modules supervise the execution of low-level modules. Based on the information gathered by mid-level modules, they regulate low-level modules in order to optimize the global quality of service, and tune the module parameters based on operational conditions and on the environment. Regulation actions involve
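
    A toy sketch of the layered self-regulation idea described above; the module names, quality metric and regulation policy are hypothetical, not taken from the paper:

      class LaserPeakDetector:                   # low level: image processing module
          def __init__(self, threshold=120):
              self.threshold = threshold

          def quality(self, frame_noise):
              # stand-in metric: fraction of scanlines with a confident laser peak
              return max(0.0, 1.0 - frame_noise * self.threshold / 255.0)

      class Supervisor:                          # high level: regulation policy
          def regulate(self, detector, quality, target=0.9):
              if quality < target:               # self-regulation step
                  detector.threshold = max(10, detector.threshold - 10)

      detector, supervisor = LaserPeakDetector(), Supervisor()
      for noise in (0.2, 0.5, 0.8):              # environment getting harsher
          q = detector.quality(noise)            # self-monitoring
          supervisor.regulate(detector, q)
          print(noise, round(q, 2), detector.threshold)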

  2. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame size at one frame per second, with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  3. Computational vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1981-01-01

    The range of fundamental computational principles underlying human vision that equally apply to artificial and natural systems is surveyed. There emerges from research a view of the structuring of vision systems as a sequence of levels of representation, with the initial levels being primarily iconic (edges, regions, gradients) and the highest symbolic (surfaces, objects, scenes). Intermediate levels are constrained by information made available by preceding levels and information required by subsequent levels. In particular, it appears that physical and three-dimensional surface characteristics provide a critical transition from iconic to symbolic representations. A plausible vision system design incorporating these principles is outlined, and its key computational processes are elaborated.

  4. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  5. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat position. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.
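
    A minimal sketch of combining 2D image coordinates with depth to obtain a 3D teat position, using an assumed TOF-camera intrinsic model and a synthetic depth image (all values illustrative, not the authors' algorithm):

      import numpy as np

      fx = fy = 570.0; cx, cy = 160.0, 120.0     # assumed intrinsics of a QVGA TOF camera
      depth = np.full((240, 320), 1.20)          # background at 1.2 m
      depth[100:140, 150:170] = 0.45             # synthetic teat 45 cm from the camera

      mask = (depth > 0.3) & (depth < 0.6)       # crude 3D (depth window) segmentation
      v, u = np.nonzero(mask)
      z = depth[mask].mean()
      x = (u.mean() - cx) * z / fx               # pinhole back-projection of the centroid
      y = (v.mean() - cy) * z / fy
      print(np.round([x, y, z], 3))              # teat position in metres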

  6. Computer vision

    SciTech Connect

    Not Available

    1982-01-01

    This paper discusses material from areas such as artificial intelligence, psychology, computer graphics, and image processing. The intent is to assemble a selection of this material in a form that will serve both as a senior/graduate-level academic text and as a useful reference to those building vision systems. This book has a strong artificial intelligence flavour, emphasising the belief that both the intrinsic image information and the internal model of the world are important in successful vision systems. The book is organised into four parts, based on descriptions of objects at four different levels of abstraction. These are generalised images (images and image-like entities); segmented images (images organised into subimages that are likely to correspond to interesting objects); geometric structures (quantitative models of image and world structures); and relational structures (complex symbolic descriptions of image and world structures). The book contains author and subject indexes.

  7. Electrotactile vision substitution for 3D trajectory following.

    PubMed

    Chekhchoukh, A; Goumidi, M; Vuillerme, N; Payan, Y; Glade, N

    2013-01-01

    Navigation for blind persons represents a challenge for researchers in vision substitution. In this field, one of the techniques used for navigation is guidance. In this study, we develop a new approach for 3D trajectory following in which the requested task is to track a light path using computer input devices (keyboard and mouse) or a rigid body handled in front of a stereoscopic camera. The light path is visualized either by direct vision or by way of an electro-stimulation device, the Tongue Display Unit, a 12 × 12 matrix of electrodes. We refine our method through a series of experiments in which the effect of the modality of perception and that of the input device are assessed. Preliminary results indicated a close correlation between the stimulated and recorded trajectories.

  8. Evaluation of vision training using 3D play game

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Ho; Kwon, Soon-Chul; Son, Kwang-Chul; Lee, Seung-Hyun

    2015-03-01

    The present study aimed to examine the effect of vision training, which is a benefit of watching 3D video images (a 3D video shooting game in this study), focusing on accommodative facility and vergence facility. Both facilities, which are scales used to measure human visual performance, are very important factors for leading a comfortable and easy life. This study was conducted on 30 participants in their 20s through 30s (19 males and 11 females, aged 24.53 ± 2.94 years), who could watch 3D video images and play the 3D game. Their accommodative and vergence facility were measured before and after they played the 2D and 3D games. It turned out that their accommodative facility improved after they played both the 2D and 3D games, and improved more right after they played the 3D game than the 2D game. Likewise, their vergence facility improved after they played both the 2D and 3D games, and improved more soon after they played the 3D game than the 2D game. In addition, it was demonstrated that their accommodative facility improved to a greater extent than their vergence facility. While studies have so far been conducted, from the perspective of human factors, on the adverse effects of 3D content on the imbalance of visual accommodation and convergence, the present study is expected to broaden the applicable scope of 3D content by utilizing the visual benefit of 3D content for vision training.

  9. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  10. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how a user gazes in 3D directions in virtual space is currently an important research topic. In this paper, we report on the development of a novel 3D gaze tracking system for Nvidia 3D Vision(®) to be used with a desktop stereoscopic display. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
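
    One common geometric formulation of a 3D gaze point is the closest point between the two eye gaze rays; the sketch below (with invented eye positions and gaze directions, not the paper's exact method) shows that computation:

      import numpy as np

      def closest_point_between_rays(p1, d1, p2, d2):
          """Midpoint of the shortest segment between rays p1 + t*d1 and p2 + s*d2."""
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          n = np.cross(d1, d2)
          t = np.dot(np.cross(p2 - p1, d2), n) / np.dot(n, n)
          s = np.dot(np.cross(p2 - p1, d1), n) / np.dot(n, n)
          return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

      eye_l, eye_r = np.array([-3.2, 0.0, 0.0]), np.array([3.2, 0.0, 0.0])    # cm
      gaze_l, gaze_r = np.array([0.5, 0.1, 6.0]), np.array([-0.4, 0.1, 6.0])  # toward screen
      print(closest_point_between_rays(eye_l, gaze_l, eye_r, gaze_r))         # 3D gaze point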

  11. 3D vision assisted flexible robotic assembly of machine components

    NASA Astrophysics Data System (ADS)

    Ogun, Philips S.; Usman, Zahid; Dharmaraj, Karthick; Jackson, Michael R.

    2015-12-01

    Robotic assembly systems either make use of expensive fixtures to hold components in predefined locations, or the poses of the components are determined using various machine vision techniques. Vision-guided assembly robots can handle subtle variations in geometries and poses of parts. Therefore, they provide greater flexibility than the use of fixtures. However, the currently established vision-guided assembly systems use 2D vision, which is limited to three degrees of freedom. The work reported in this paper is focused on flexible automated assembly of clearance fit machine components using 3D vision. The recognition and the estimation of the poses of the components are achieved by matching their CAD models with the acquired point cloud data of the scene. Experimental results obtained from a robot demonstrating the assembly of a set of rings on a shaft show that the developed system is not only reliable and accurate, but also fast enough for industrial deployment.
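
    At the heart of CAD-to-point-cloud pose estimation is the least-squares rigid transform between corresponded points. A minimal Kabsch/SVD sketch with synthetic correspondences follows (an illustration of the principle, not the authors' matching pipeline):

      import numpy as np

      def best_fit_transform(model_pts, scene_pts):
          """Least-squares rigid transform mapping model_pts onto scene_pts (Kabsch)."""
          mc, sc = model_pts.mean(0), scene_pts.mean(0)
          H = (model_pts - mc).T @ (scene_pts - sc)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:               # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, sc - R @ mc

      rng = np.random.default_rng(3)
      model = rng.uniform(-1, 1, (50, 3))        # stand-in for sampled CAD model points
      Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]
      true_R = Q if np.linalg.det(Q) > 0 else -Q # a proper rotation
      true_t = np.array([0.2, -0.1, 0.5])
      scene = model @ true_R.T + true_t          # "point cloud" of the posed part
      R, t = best_fit_transform(model, scene)
      print(np.allclose(R, true_R), np.allclose(t, true_t))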

  12. Glass Vision 3D: Digital Discovery for the Deaf

    ERIC Educational Resources Information Center

    Parton, Becky Sue

    2017-01-01

    Glass Vision 3D was a grant-funded project focused on developing and researching a Google Glass app that would allow young Deaf children to look at the QR code of an object in the classroom and see an augmented reality projection that displays an American Sign Language (ASL) related video. Twenty-five objects and videos were prepared and tested…

  13. 3D vision upgrade kit for the TALON robot system

    NASA Astrophysics Data System (ADS)

    Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-02-01

    In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.

  14. Enhanced operator perception through 3D vision and haptic feedback

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  15. (Computer vision and robotics)

    SciTech Connect

    Jones, J.P.

    1989-02-13

    The traveler attended the Fourth Aalborg International Symposium on Computer Vision at Aalborg University, Aalborg, Denmark. The traveler presented three invited lectures entitled "Concurrent Computer Vision on a Hypercube Multicomputer," "The Butterfly Accumulator and its Application in Concurrent Computer Vision on Hypercube Multicomputers," and "Concurrency in Mobile Robotics at ORNL," and a ten-minute editorial entitled "Is Concurrency an Issue in Computer Vision?" The traveler obtained information on current R&D efforts elsewhere in concurrent computer vision.

  16. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
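
    One of the VM functionalities mentioned, sub-pixel centroid measurement of a circular target, can be sketched in a few lines; the synthetic target and its position are illustrative only:

      import numpy as np

      yy, xx = np.mgrid[0:31, 0:31]
      target = np.exp(-((xx - 15.3) ** 2 + (yy - 14.7) ** 2) / 18.0)   # blurred circular dot

      w = target / target.sum()
      u, v = (w * xx).sum(), (w * yy).sum()           # intensity-weighted centroid
      print(round(float(u), 2), round(float(v), 2))   # recovers ~(15.3, 14.7) sub-pixel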

  17. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  18. 3D vision upgrade kit for TALON robot

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  19. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three dimensional laser radar imagery for use with a robotic type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide to it needed information so it can fetch and grasp targets in a space-type scenario.

  20. Panoramic 3d Vision on the ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Paar, G.; Griffiths, A. D.; Barnes, D. P.; Coates, A. J.; Jaumann, R.; Oberst, J.; Gao, Y.; Ellery, A.; Li, R.

    …w.r.t. fields of view, ranging capability (distance measurement capability), data rate, necessity of calibration targets, hardware & data interfaces to other subsystems (e.g. navigation) as well as accuracy impacts of sensor design and compression ratio.
    • Geometric Calibration: The geometric properties of the individual cameras including various spectral filters, their mutual relations and the dynamic geometrical relation between rover frame and cameras - with the mast in between - are precisely described by a calibration process. During surface operations these relations will be continuously checked and updated by photogrammetric means; environmental influences such as temperature, pressure and the Mars gravity will be taken into account.
    • Surface Mapping: Stereo imaging using the WAC stereo pair is used for the 3D reconstruction of the rover vicinity to identify, locate and characterize potentially interesting spots (3-10 for an experimental cycle to be performed within approx. 10-30 sols). The HRC is used for high-resolution imagery of these regions of interest to be overlaid on the 3D reconstruction and potentially refined by shape-from-shading techniques. A quick processing result is crucial for time-critical operations planning; therefore emphasis is laid on automatic behaviour and intrinsic error detection mechanisms. The mapping results will be continuously fused, updated and synchronized with the map used by the navigation system. The surface representation needs to take into account the different resolutions of HRC and WAC as well as uncommon or even unexpected image acquisition modes such as long-range, wide-baseline stereo from different rover positions or escape strategies in the case of loss of one of the stereo camera heads.
    • Panorama Mosaicking: The production of a high-resolution stereoscopic panorama is nowadays state of the art in computer vision. However, certain challenges such as the need for access to accurate spherical coordinates, maintenance

  1. International computer vision directory

    SciTech Connect

    Flora, P.C.

    1986-01-01

    This book contains information on computerized automation technologies. State-of-the-art computer vision systems for many areas of industrial use are covered. Other topics discussed include the following: automated inspection systems; robot/vision systems; vision process control; cameras (vidicon and solid state); vision peripherals and components; and pattern processors.

  2. Intraoperative 3D Computed Tomography: Spine Surgery.

    PubMed

    Adamczak, Stephanie E; Bova, Frank J; Hoh, Daniel J

    2017-10-01

    Spinal instrumentation often involves placing implants without direct visualization of their trajectory or proximity to adjacent neurovascular structures. Two-dimensional fluoroscopy is commonly used to navigate implant placement, but with the advent of computed tomography, followed by the invention of a mobile scanner with an open gantry, three-dimensional (3D) navigation is now widely used. This article critically appraises the available literature to assess the influence of 3D navigation on radiation exposure, accuracy of instrumentation, operative time, and patient outcomes. Also explored is the latest technological advance in 3D neuronavigation: the manufacturing of, via 3D printers, patient-specific templates that direct implant placement. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. CAD-based 3D object representation for robot vision

    SciTech Connect

    Bhanu, B.; Ho, C.C.

    1987-08-01

    This article explains that most existing vision systems rely on models generated in an ad hoc manner and have no explicit relation to the CAD/CAM system originally used to design and manufacture these objects. The authors desire a more unified system that allows vision models to be automatically generated from an existing CAD database. A CAD system contains an interactive design interface, graphic display utilities, model analysis tools, automatic manufacturing interfaces, etc. Although it is a suitable environment for design purposes, its representations and the models it generates do not contain all the features that are important in robot vision applications. In this article, the authors propose a CAD-based approach for building representations and models that can be used in diverse applications involving 3D object recognition and manipulation. There are two main steps in using this approach. First, they design the object's geometry using a CAD system, or extract its CAD model from the existing database if it has already been modeled. Second, they develop representations from the CAD model and construct features possibly by combining multiple representations that are crucial in 3D object recognition and manipulation.

  4. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
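
    The ranging principle of an FMCW coherent laser radar can be summarised as R = c · f_beat · T_sweep / (2B). The parameter values below are assumptions for illustration, not the system's actual specifications:

      c = 3.0e8            # speed of light, m/s
      B = 100.0e9          # assumed optical chirp bandwidth, Hz (c/(2B) = 1.5 mm resolution)
      T_sweep = 1.0e-3     # assumed chirp duration, s

      def fmcw_range(f_beat_hz):
          return c * f_beat_hz * T_sweep / (2.0 * B)

      print(fmcw_range(1.0e6))   # a 1 MHz beat frequency corresponds to a 1.5 m range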

  5. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  6. The 3-D vision system integrated dexterous hand

    NASA Technical Reports Server (NTRS)

    Luo, Ren C.; Han, Youn-Sik

    1989-01-01

    Most multifingered hands use a tendon mechanism to minimize the size and weight of the hand. Such tendon mechanisms suffer from the problems of stiction and friction of the tendons resulting in a reduction of control accuracy. A design for a 3-D vision system integrated dexterous hand with motor control is described which overcomes these problems. The proposed hand is composed of three three-jointed grasping fingers with tactile sensors on their tips, a two-jointed eye finger with a cross-shaped laser beam emitting diode in its distal part. The two non-grasping fingers allow 3-D vision capability and can rotate around the hand to see and measure the sides of grasped objects and the task environment. An algorithm that determines the range and local orientation of the contact surface using a cross-shaped laser beam is introduced along with some potential applications. An efficient method for finger force calculation is presented which uses the measured contact surface normals of an object.

  7. 3-D measuring of engine camshaft based on machine vision

    NASA Astrophysics Data System (ADS)

    Qiu, Jianxin; Tan, Liang; Xu, Xiaodong

    2008-12-01

    Non-contact 3D measuring based on machine vision is introduced into precise camshaft measuring. Currently, because CCD 3-dimensional measuring cannot meet the requirements for camshaft measuring precision, it is necessary to improve its measuring precision. In this paper, we put forward a method to improve the measuring approach. A Multi-Character Match method based on the Polygonal Non-regular model is advanced, building on the theory of Corner Extraction and Corner Matching. This method solves the problems of matching difficulty and low precision that arise in the measuring process when the Coded Marked Point method and the Self-Character Match method are used. The 3D measuring experiment on a camshaft, based on the Multi-Character Match method of the Polygonal Non-regular model, proves that the normal average measuring precision is increased to a new level of less than 0.04 mm in the point-cloud merge. This measuring method can effectively increase the 3D measuring precision of the binocular CCD.

  8. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  11. Novel computer vision algorithm for the reliable analysis of organelle morphology in whole cell 3D images--A pilot study for the quantitative evaluation of mitochondrial fragmentation in amyotrophic lateral sclerosis.

    PubMed

    Lautenschläger, Janin; Lautenschläger, Christian; Tadic, Vedrana; Süße, Herbert; Ortmann, Wolfgang; Denzler, Joachim; Stallmach, Andreas; Witte, Otto W; Grosskreutz, Julian

    2015-11-01

    The function of intact organelles, whether mitochondria, Golgi apparatus or endoplasmic reticulum (ER), relies on their proper morphological organization. It is recognized that disturbances of organelle morphology are early events in disease manifestation, but reliable and quantitative detection of organelle morphology is difficult and time-consuming. Here we present a novel computer vision algorithm for the assessment of organelle morphology in whole cell 3D images. The algorithm allows the numerical and quantitative description of organelle structures, including total number and length of segments, cell and nucleus area/volume as well as novel texture parameters like lacunarity and fractal dimension. Applying the algorithm we performed a pilot study in cultured motor neurons from transgenic G93A hSOD1 mice, a model of human familial amyotrophic lateral sclerosis. In the presence of the mutated SOD1 and upon excitotoxic treatment with kainate we demonstrate a clear fragmentation of the mitochondrial network, with an increase in the number of mitochondrial segments and a reduction in the length of mitochondria. Histogram analyses show a reduced number of tubular mitochondria and an increased number of small mitochondrial segments. The computer vision algorithm for the evaluation of organelle morphology allows an objective assessment of disease-related organelle phenotypes with greatly reduced examiner bias and will aid the evaluation of novel therapeutic strategies on a cellular level.

  12. Robotic 3D vision solder joint verification system evaluation

    SciTech Connect

    Trent, M.A.

    1992-02-01

    A comparative performance evaluation was conducted between a proprietary inspection system using intelligent 3D vision and manual visual inspection of solder joints. The purpose was to assess the compatibility and correlation of the automated system with current visual inspection criteria. The results indicated that the automated system was more accurate (>90%) than visual inspection (60-70%) in locating and/or categorizing solder joint defects. In addition, the automated system can offer significant capabilities to characterize and monitor a soldering process by measuring physical attributes, such as solder joint volumes and wetting angles, which are not available through manual visual inspection. A more in-depth evaluation of this technology is recommended.

  13. Fast vision-based catheter 3D reconstruction.

    PubMed

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D

    2016-07-21

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots, based on the views of two arbitrarily positioned cameras, is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg under the added noise) of the proposed high-speed algorithms.
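
    The reconstruction step rests on standard two-view geometry. As a rough illustration only (not the paper's closed-form quadratic-curve solution), the sketch below triangulates matched centreline pixels from the two calibrated views with OpenCV and derives the tip position and catheter length from the ordered samples; the projection matrices and pixel arrays are assumed to be given.

```python
# A rough two-view triangulation sketch; P1, P2 and the matched centreline
# pixels are assumed to come from calibration and segmentation.
import cv2
import numpy as np

def reconstruct_centreline(P1, P2, pts_cam1, pts_cam2):
    """P1, P2: 3x4 projection matrices; pts_camX: (N, 2) matched centreline pixels, ordered base-to-tip."""
    X_h = cv2.triangulatePoints(P1, P2,
                                pts_cam1.T.astype(np.float64),
                                pts_cam2.T.astype(np.float64))
    X = (X_h[:3] / X_h[3]).T                                   # homogeneous -> Euclidean, (N, 3)
    tip = X[-1]                                                # last ordered sample is the tip
    length = np.linalg.norm(np.diff(X, axis=0), axis=1).sum()  # polyline length of the centreline
    return X, tip, length
```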

  14. Fast vision-based catheter 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots, based on the views of two arbitrarily positioned cameras, is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg under the added noise) of the proposed high-speed algorithms.

  15. Computer Vision Syndrome.

    PubMed

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  16. (Computer) Vision without Sight

    PubMed Central

    Manduchi, Roberto; Coughlan, James

    2012-01-01

    Computer vision holds great promise for helping persons with blindness or visual impairments (VI) to interpret and explore the visual world. To this end, it is worthwhile to assess the situation critically by understanding the actual needs of the VI population and which of these needs might be addressed by computer vision. This article reviews the types of assistive technology application areas that have already been developed for VI, and the possible roles that computer vision can play in facilitating these applications. We discuss how appropriate user interfaces are designed to translate the output of computer vision algorithms into information that the user can quickly and safely act upon, and how system-level characteristics affect the overall usability of an assistive technology. Finally, we conclude by highlighting a few novel and intriguing areas of application of computer vision to assistive technology. PMID:22815563

  17. Simultaneous perimeter measurement for 3D object with a binocular stereo vision measurement system

    NASA Astrophysics Data System (ADS)

    Peng, Zhao; Guo-Qiang, Ni

    2010-04-01

    A simultaneous measurement scheme for the surface boundary perimeters of multiple three-dimensional (3D) objects is proposed. This scheme consists of three steps. First, a binocular stereo vision measurement system with two CCD cameras is devised to obtain two images of the detected objects' 3D surface boundaries. Second, two geodesic active contours are applied to converge simultaneously to the objects' contour edges in the two CCD images to perform the stereo matching. Finally, the multiple spatial contours are reconstructed using cubic B-spline curve interpolation, and the contour length of every spatial contour is computed as the boundary perimeter of the corresponding 3D object. An experiment measuring the bent-surface perimeters of four 3D objects indicates that the scheme's measurement repetition error is within 0.7 mm.
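
    The final step, contour reconstruction and length computation, can be sketched with standard tools. The example below is an illustration rather than the authors' implementation: it fits a closed cubic B-spline through already-reconstructed, ordered 3D boundary points with SciPy and integrates its arc length numerically; the synthetic test contour is a unit circle.

```python
# Perimeter of a closed 3D contour from ordered boundary points via a cubic B-spline.
import numpy as np
from scipy.interpolate import splprep, splev

def contour_perimeter(points_3d, samples=2000):
    """points_3d: (N, 3) ordered boundary points of one object (closed contour)."""
    tck, _ = splprep(points_3d.T, s=0.0, per=True, k=3)   # closed (periodic) cubic B-spline
    u = np.linspace(0.0, 1.0, samples)
    x, y, z = splev(u, tck)
    curve = np.column_stack([x, y, z])
    return np.linalg.norm(np.diff(curve, axis=0), axis=1).sum()

theta = np.linspace(0, 2 * np.pi, 61)   # last point repeats the first, closing the contour
circle = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
print(contour_perimeter(circle))        # approximately 2*pi for a unit circle
```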

  18. Active-Vision Control Systems for Complex Adversarial 3-D Environments

    DTIC Science & Technology

    2009-03-01

    environment. The new capabilities of autonomous sensing and control enable UAV/munition operations: in a clandestine/covert manner; in close proximity...nature, and without relying upon highly accurate 3-D models of the environment. The new capabilities of autonomous sensing and control enable UAV ...blur). While these problems are classical in computer vision and image analysis, all algorithms published so far required knowledge of the calibration

  19. Depth propagation and surface construction in 3-D vision.

    PubMed

    Georgeson, Mark A; Yates, Tim A; Schofield, Andrew J

    2009-01-01

    In stereo vision, regions with ambiguous or unspecified disparity can acquire perceived depth from unambiguous regions. This has been called stereo capture, depth interpolation or surface completion. We studied some striking induced depth effects suggesting that depth interpolation and surface completion are distinct stages of visual processing. An inducing texture (2-D Gaussian noise) had sinusoidal modulation of disparity, creating a smooth horizontal corrugation. The central region of this surface was replaced by various test patterns whose perceived corrugation was measured. When the test image was horizontal 1-D noise, shown to one eye or to both eyes without disparity, it appeared corrugated in much the same way as the disparity-modulated (DM) flanking regions. But when the test image was 2-D noise, or vertical 1-D noise, little or no depth was induced. This suggests that horizontal orientation was a key factor. For a horizontal sine-wave luminance grating, strong depth was induced, but for a square-wave grating, depth was induced only when its edges were aligned with the peaks and troughs of the DM flanking surface. These and related results suggest that disparity (or local depth) propagates along horizontal 1-D features, and then a 3-D surface is constructed from the depth samples acquired. The shape of the constructed surface can be different from the inducer, and so surface construction appears to operate on the results of a more local depth propagation process.

  20. Synthetic vision in the cockpit: 3D systems for general aviation

    NASA Astrophysics Data System (ADS)

    Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth

    2001-08-01

    Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or outpacing Moore's Law with the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS) and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.

  1. Robust computational vision

    NASA Astrophysics Data System (ADS)

    Schunck, Brian G.

    1993-08-01

    This paper presents a paradigm for formulating reliable machine vision algorithms using methods from robust statistics. Machine vision is the process of estimating features from images by fitting a model to visual data. Vision research has produced an understanding of the physics and mathematics of visual processes. The fact that computer graphics programs can produce realistic renderings of artificial scenes indicates that our understanding of vision processes must be quite good. The premise of this paper is that the problem in applying computer vision in realistic scenes is not the fault of the theory of vision. We have good models for visual phenomena, but can do a better job of applying the models to images. Our understanding of vision must be used in computations that are robust to the kinds of errors that occur in visual signals. This paper argues that vision algorithms should be formulated using methods from robust regression. The nature of errors in visual signals is discussed, and a prescription for formulating robust algorithms is described. To illustrate the concepts, robust methods have been applied to several problems: surface reconstruction, dynamic stereo, image flow estimation, and edge detection.
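
    To make the prescription concrete, the sketch below shows one standard robust-regression estimator (iteratively reweighted least squares with a Tukey biweight and a MAD scale estimate) applied to line fitting with gross outliers. It is an illustrative example of the robust-statistics approach the paper advocates, not the author's own formulation; the tuning constant and iteration count are conventional choices.

```python
# Robust line fit by IRLS with Tukey biweight weights (illustrative only).
import numpy as np

def tukey_irls_line(x, y, c=4.685, iters=20):
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y, dtype=float)
    beta = np.zeros(2)
    for _ in range(iters):
        W = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * W[:, None], y * W, rcond=None)
        r = y - A @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # robust scale (MAD)
        u = r / (c * s)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)          # Tukey biweight: outliers get zero weight
    return beta                                                     # slope, intercept

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 2 * x + 1 + 0.02 * rng.standard_normal(100)
y[::10] += 5.0                     # inject gross outliers
print(tukey_irls_line(x, y))       # close to (2, 1) despite the outliers
```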

  2. Laser Imaging Systems For Computer Vision

    NASA Astrophysics Data System (ADS)

    Vlad, Ionel V.; Ionescu-Pallas, Nicholas; Popa, Dragos; Apostol, Ileana; Vlad, Adriana; Capatina, V.

    1989-05-01

    Computer vision is becoming an essential feature of high-level artificial intelligence. Laser imaging systems act as a special kind of image preprocessor/converter, extending the access of computer "intelligence" to inspection, analysis and decision-making in new "worlds": nanometric, three-dimensional (3D), ultrafast, hostile to humans, etc. Considering that the heart of the problem is the matching of optical methods with computer software, some of the most promising interferometric, projection and diffraction systems are reviewed, with discussions of our present results and of their potential for precise 3D computer vision.

  3. How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision

    PubMed Central

    Cao, Yongqiang; Grossberg, Stephen

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model proposes how lateral geniculate nucleus (LGN) and hierarchically organized laminar circuits in cortical areas V1, V2, and V4 interact to control processes of 3D boundary formation and surface filling-in that simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. Interactions between layers 4, 3B, and 2/3 in V1 and V2 carry out stereopsis and 3D boundary formation. Both binocular and monocular information combine to form 3D boundary and surface representations. Surface contour surface-to-boundary feedback from V2 thin stripes to V2 pale stripes combines computationally complementary boundary and surface formation properties, leading to a single consistent percept, while also eliminating redundant 3D boundaries, and triggering figure-ground perception. False binocular boundary matches are eliminated by Gestalt grouping properties during boundary formation. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. These model mechanisms have also simulated properties of 3D neon

  4. Computer Assisted Cancer Device - 3D Imaging

    DTIC Science & Technology

    2006-10-01

    tomosynthesis images of the breast. iCAD has identified several sources of 3D tomosynthesis data, and has begun adapting its image analysis...collaborative relationships with major manufacturers of tomosynthesis equipment. iCAD believes that tomosynthesis, a 3D breast imaging technique...purported advantages of tomosynthesis relative to conventional mammography include: improved lesion visibility, improved lesion detectability and

  5. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development in both academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes: external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to "see" inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  6. 3-D Object Recognition Using Combined Overhead And Robot Eye-In-Hand Vision System

    NASA Astrophysics Data System (ADS)

    Luc, Ren C.; Lin, Min-Hsiung

    1987-10-01

    A new approach for recognizing 3-D objects using a combined overhead and eye-in-hand vision system is presented. A novel eye-in-hand vision system using a fiber-optic image array is described. The significance of this approach is the fast and accurate recognition of 3-D object information compared to traditional stereo image processing. For the recognition of 3-D objects, the overhead vision system takes a 2-D top-view image and the eye-in-hand vision system takes side-view images orthogonal to the top-view image plane. We have developed and demonstrated a unique approach to integrate this 2-D information into a 3-D representation, based on a new approach called "3-D Volumetric Description from 2-D Orthogonal Projections". The Unimate PUMA 560 and TRAPIX 5500 real-time image processor have been used to test the success of the entire system.
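
    The core idea of combining a top view with orthogonal side views can be conveyed with a toy volume-carving sketch. This is a rough illustration under simplifying assumptions (pre-registered binary silhouettes on a common voxel grid), not the paper's full method; the arrays here are synthetic.

```python
# Toy volumetric description from two orthogonal silhouettes.
import numpy as np

def carve_volume(top_mask, side_mask):
    """top_mask: (NX, NY) bool silhouette (x-y); side_mask: (NX, NZ) bool silhouette (x-z)."""
    # A voxel is kept only if it projects inside both silhouettes.
    return top_mask[:, :, None] & side_mask[:, None, :]

top = np.zeros((64, 64), bool);  top[16:48, 16:48] = True     # square footprint (top view)
side = np.zeros((64, 32), bool); side[16:48, 4:20] = True     # extruded height (side view)
volume = carve_volume(top, side)                              # (64, 64, 32) occupancy grid
print(volume.sum(), "occupied voxels")
```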

  7. Demonstration of a 3D vision algorithm for space applications

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P. (Editor)

    1987-01-01

    This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using movement invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  8. New 3-D vision-sensor for shape-measurement applications

    NASA Astrophysics Data System (ADS)

    Moring, Ilkka; Myllyla, Risto A.; Honkanen, Esa; Kaisto, Ilkka P.; Kostamovaara, Juha T.; Maekynen, Anssi J.; Manninen, Markku

    1990-04-01

    In this paper we describe a new 3D-vision sensor developed in cooperation with the Technical Research Centre of Finland, the University of Oulu, and Prometrics Oy Co. The sensor is especially intended for the non-contact measurement of the shapes and dimensions of large industrial objects. It consists of a pulsed time-of-flight laser rangefinder, a target point detection system, a mechanical scanner, and a PC-based computer system. Our 3D-sensor has two operational modes: one for range image acquisition and the other for the search and measurement of single coordinate points. In the range image mode a scene is scanned and a 3D-image of the desired size is obtained. In the single point mode the sensor automatically searches for cooperative target points on the surface of an object and measures their 3D-coordinates. This mode can be used, e.g. for checking the dimensions of objects and for calibration. The results of preliminary performance tests are presented in the paper.

  9. Building a 3D scanner system based on monocular vision.

    PubMed

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system, which is built by using an ingenious geometric construction method based on monocular vision. The system is simple, low cost, and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser, and one background calibration board are required. The experimental results show that the system is robust and effective, and the scanning precision can be satisfied for normal users.
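
    A minimal sketch of the kind of geometry such a scanner relies on, under stated assumptions: the laser plane has already been calibrated in camera coordinates (n·X = d), and for each image column the brightest pixel of the stripe is back-projected as a viewing ray and intersected with that plane. This illustrates line-laser triangulation in general, not the authors' specific calibration procedure; all names and parameters are placeholders.

```python
# Line-laser triangulation: brightest pixel per column, ray-plane intersection.
import numpy as np

def scan_profile(image, K, plane_n, plane_d):
    """image: (H, W) grayscale with a bright laser stripe; K: 3x3 intrinsics;
    plane_n, plane_d: laser plane in camera coordinates (n . X = d)."""
    K_inv = np.linalg.inv(K)
    rows = image.argmax(axis=0)                    # brightest row in each column
    points = []
    for u, v in enumerate(rows):
        ray = K_inv @ np.array([u, v, 1.0])        # viewing ray direction through pixel (u, v)
        t = plane_d / (plane_n @ ray)              # ray-plane intersection parameter
        points.append(t * ray)
    return np.array(points)                        # (W, 3) surface profile in the camera frame
```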

  10. Stereo 3D vision adapter using commercial DIY goods

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Ohara, Takashi

    2009-10-01

    A conventional display can show only one screen, and its screen area cannot simply be enlarged, for example doubled. A mirror, meanwhile, supplies the same image, but the mirror image is usually upside down. Assume that the images on the original screen and on the virtual screen in the mirror are completely different and that both images can be displayed independently; the screen area could then effectively be doubled. This extension method lets observers view the virtual image plane and thus enlarges the screen area twofold. Although the display region is doubled, such a virtual display cannot by itself produce 3D images. In this paper, we present an extension method using a unidirectional diffusing image screen and an improvement for displaying a 3D image using orthogonally polarized image projection.

  11. 3D Vision on Mars: Stereo processing and visualizations for NASA and ESA rover missions

    NASA Astrophysics Data System (ADS)

    Huber, Ben

    2016-07-01

    Three-dimensional (3D) vision processing is an essential component of planetary rover mission planning and scientific data analysis. Standard ground vision processing products are digital terrain maps, panoramas, and virtual views of the environment. Such processing is currently being developed for the PanCam instrument of ESA's ExoMars Rover mission by the PanCam 3D Vision Team under JOANNEUM RESEARCH coordination. Camera calibration, quality estimation of the expected results, and the interfaces to other mission elements such as operations planning, the rover navigation system and global Mars mapping are a specific focus of the current work. The main goals of the 3D Vision team in this context are: instrument design support and calibration processing; development of 3D vision functionality; development of a 3D visualization tool for scientific data analysis; 3D reconstructions from stereo image data during the mission; and support for 3D scientific exploitation to characterize the overall landscape geomorphology, processes, and the nature of the geologic record using the reconstructed 3D models. The developed processing framework PRoViP establishes an extensible framework for 3D vision processing in planetary robotic missions. Examples of processing products and capabilities are: digital terrain models, ortho images, 3D meshes, and occlusion, solar illumination, slope, roughness, and hazard maps. Another important processing capability is the fusion of rover- and orbiter-based images, with support for multiple missions and sensors (e.g. MSL Mastcam stereo processing). For 3D visualization, a tool called PRo3D has been developed to analyze and directly interpret digital outcrop models. Stereo image products derived from Mars rover data can be rendered in PRo3D, enabling the user to zoom, rotate and translate the generated 3D outcrop models. Interpretations can be digitized directly onto the 3D surface, and simple measurements of the outcrop and sedimentary features

  12. Appendage flow computations using the INS3D computer code

    NASA Astrophysics Data System (ADS)

    Ohring, Samuel

    1989-10-01

    The INS3D code, a steady state incompressible, fully 3-D Navier-Stokes solver, was applied to the computation of flow past an appendage mounted between two parallel flat plates of infinite extent at a Reynolds number of one-half million. The Baldwin-Lomax turbulence model was used to compute the eddy viscosity. The appendage consisted of a 1.5:1 elliptical nose and a NACA 0020 tail joined at maximum thickness of 0.24 chordlengths. A detailed description of the flow results covers all the major features of appendage flow and the results, for an unfilleted appendage, are in general agreement with experimental and other numerical results, except that the lateral location of the horseshoe vortex is larger than that in the experimental results. A detailed description is presented of the important trailing edge vortex. Detailed results for a second flow case, in which filleting is applied mainly to the front and side of the aforementioned appendage, show a greatly weakened horseshoe vortex but a still significant trailing edge vortex, that prevented velocity-deficit reduction in the wake, compared to the unfilleted appendage flow case. The calculations for the filleted case also exhibited an upstream instability. The plotting program PLOT3D was used to obtain color photos for flow visualization.

  13. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitate and improve this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  14. Adaptive noise suppression technique for dense 3D point cloud reconstructions from monocular vision

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    Mobile vision-based autonomous vehicles use video frames from multiple angles to construct a 3D model of their environment. In this paper, we present a post-processing adaptive noise suppression technique to enhance the quality of the computed 3D model. Our near real-time reconstruction algorithm uses each pair of frames to compute the disparities of tracked feature points, translating the pixel distance a feature has traveled within the frame into real-world depth values. These tracked feature points are then plotted to form a dense, colored point cloud. Due to the inevitable small vibrations of the camera and mismatches within the feature tracking algorithm, the point cloud model contains a significant number of misplaced points that appear as noise. The proposed noise suppression technique utilizes the spatial information of each point to unify points of similar texture and color into objects while simultaneously removing noise points not associated with any nearby object. The noise filter combines all points of similar depth into 2D layers throughout the point cloud model. By applying erosion and dilation techniques, we are able to eliminate the unwanted floating points while retaining the points of larger objects. To reverse the compression process, we transform the 2D layers back into the 3D model, allowing points to return to their original positions without the attached noise components. We evaluate the resulting noiseless point cloud by using an unmanned ground vehicle to perform obstacle avoidance tasks. The contribution of the noise suppression technique is measured by evaluating the accuracy of the 3D reconstruction.
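
    A simplified sketch of this layer-wise filtering idea is given below. It is not the authors' implementation: points are binned into depth layers, each layer is rasterised into a 2D occupancy mask, small isolated blobs are removed by binary opening (erosion followed by dilation), and only points whose cells survive are kept. The bin counts, grid size and iteration count are arbitrary placeholder values.

```python
# Layer-wise morphological noise filter for a 3D point cloud (illustrative).
import numpy as np
from scipy import ndimage

def denoise_point_cloud(points, depth_bins=32, grid=256, open_iters=2):
    """points: (N, 3) array with columns x, y, depth."""
    mins, maxs = points.min(0), points.max(0)
    scaled = (points - mins) / (maxs - mins + 1e-9)
    xi = (scaled[:, 0] * (grid - 1)).astype(int)
    yi = (scaled[:, 1] * (grid - 1)).astype(int)
    zi = (scaled[:, 2] * (depth_bins - 1)).astype(int)
    keep = np.zeros(len(points), bool)
    for layer in range(depth_bins):
        sel = zi == layer
        if not sel.any():
            continue
        mask = np.zeros((grid, grid), bool)
        mask[yi[sel], xi[sel]] = True
        cleaned = ndimage.binary_opening(mask, iterations=open_iters)  # erode, then dilate
        keep[sel] = cleaned[yi[sel], xi[sel]]                          # keep points whose cells survive
    return points[keep]
```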

  15. Integration of multiple-baseline color stereo vision with focus and defocus analysis for 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Yuan, Ta; Subbarao, Murali

    1998-12-01

    A 3D vision system named SVIS is developed for 3D shape measurement that integrates three methods: (i) multiple-baseline, multiple-resolution Stereo Image Analysis (SIA) that uses color image data, (ii) Image Defocus Analysis (IDA), and (iii) Image Focus Analysis (IFA). IDA and IFA are less accurate than stereo, but they do not suffer from the correspondence problem associated with stereo. A rough 3D shape is first obtained using IDA, and then IFA is used to obtain an improved estimate. The result is then used in SIA to solve the correspondence problem and obtain an accurate measurement of 3D shape. SIA is implemented using color images recorded at multiple baselines. Color images provide more information than monochrome images for stereo matching; therefore, matching errors are reduced and the accuracy of the 3D shape is improved. Further improvements are obtained through multiple-baseline stereo analysis. First, short-baseline images are analyzed to obtain an initial estimate of 3D shape. In this step, stereo matching errors are low and computation is fast, since a shorter baseline results in lower disparities. The initial estimate of 3D shape is then used to match longer-baseline stereo images, which yields a more accurate estimate of 3D shape. The stereo matching step is implemented using a multiple-resolution matching approach to reduce computation: first, lower-resolution images are matched and the results are used in matching higher-resolution images. This paper presents the algorithms and the experimental results of 3D shape measurement with SVIS for several objects. These results suggest a practical vision system for 3D shape measurement.

  16. 3D Machine Vision and Additive Manufacturing: Concurrent Product and Process Development

    NASA Astrophysics Data System (ADS)

    Ilyas, Ismet P.

    2013-06-01

    The manufacturing environment changes rapidly and turbulently. Digital manufacturing (DM) plays a significant role and is one of the key strategies in setting up a vision and strategic plan toward knowledge-based manufacturing. An approach combining 3D machine vision (3D-MV) and additive manufacturing (AM) may finally be finding its niche in manufacturing. This paper briefly overviews the integration of 3D machine vision and AM in concurrent product and process development, the challenges and opportunities, and the implementation of 3D-MV and AM at POLMAN Bandung to accelerate product design and process development, and it discusses a direct deployment of this approach on a real case from our industrial partners, who regard it as one of their most important and strategic approaches in research as well as product/prototype development. The strategic aspects and needs of this combined approach in research, design and development are the main concerns of the presentation.

  17. Impact of 3D vision on mental workload and laparoscopic performance in inexperienced subjects.

    PubMed

    Gómez-Gómez, E; Carrasco-Valiente, J; Valero-Rosa, J; Campos-Hernández, J P; Anglada-Curado, F J; Carazo-Carazo, J L; Font-Ugalde, P; Requena-Tapia, M J

    2015-05-01

    To assess the effect of vision in three dimensions (3D) versus two dimensions (2D) on mental workload and laparoscopic performance during simulation-based training. A prospective, randomized crossover study on students inexperienced in operative laparoscopy was conducted. Forty-six candidates executed five standardized exercises on a pelvitrainer with both vision systems (3D and 2D). Laparoscopy performance was assessed using the total time (in seconds) and the number of failed attempts. For workload assessment, the validated NASA-TLX questionnaire was administered. 3D vision improves performance, reducing the time (3D = 1006.08 ± 315.94 vs. 2D = 1309.17 ± 300.28; P < .001) and the total number of failed attempts (3D = .84 ± 1.26 vs. 2D = 1.86 ± 1.60; P < .001). For each exercise, 3D vision also shows better performance times: "transfer objects" (P = .001), "single knot" (P < .001), "clip and cut" (P < .05), and "needle guidance" (P < .001). In addition, according to the NASA-TLX results, less mental workload is experienced with the use of 3D (P < .001). However, 3D vision was associated with greater visual impairment (P < .01) and headaches (P < .05). The incorporation of 3D systems in laparoscopic training programs would facilitate the acquisition of laparoscopic skills, because they reduce mental workload and improve the performance of inexperienced surgeons. However, some undesirable effects such as visual discomfort or headache are identified initially. Copyright © 2014 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  18. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    PubMed Central

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
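
    After rectification, the correspondence search reduces to a one-dimensional scan, as the record notes. The sketch below is a bare-bones illustration of that step (not the authors' implementation): a SAD window search along the same image row for a target already detected in the left view; the window size and disparity range are arbitrary placeholder values.

```python
# SAD search along the epipolar (image) row of a rectified stereo pair.
import numpy as np

def sad_search_along_row(left, right, target_xy, win=15, max_disp=120):
    """left/right: rectified grayscale images; target_xy: (x, y) of the target in the left image."""
    h = win // 2
    x, y = target_xy
    template = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        xr = x - d                                     # candidate column in the right image
        if xr - h < 0:
            break
        patch = right[y - h:y + h + 1, xr - h:xr + h + 1].astype(np.int32)
        cost = np.abs(template - patch).sum()          # sum of absolute differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d                                      # disparity; depth follows from f*B/disparity
```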

  19. Development of a stereo vision measurement system for a 3D three-axial pneumatic parallel mechanism robot arm.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.

  20. Computational modeling of RNA 3D structures and interactions.

    PubMed

    Dawson, Wayne K; Bujnicki, Janusz M

    2016-04-01

    RNA molecules have key functions in cellular processes beyond being carriers of protein-coding information. These functions are often dependent on the ability to form complex three-dimensional (3D) structures. However, experimental determination of RNA 3D structures is difficult, which has prompted the development of computational methods for structure prediction from sequence. Recent progress in 3D structure modeling of RNA and emerging approaches for predicting RNA interactions with ions, ligands and proteins have been stimulated by successes in protein 3D structure modeling. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.

  2. Python and computer vision

    SciTech Connect

    Doak, J. E.; Prasad, Lakshman

    2002-01-01

    This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project including existing C and C++ modules. For (1), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. And finally, (3) discusses the use of Boost Python to interact with the pre-existing C and C++ code that creates the CDTs and CATs, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.

  3. Vision system for fast 3D obstacle detection via sterovision matching

    NASA Astrophysics Data System (ADS)

    Bouayed, Hichem A.; Pissaloux, Edwige E.; Abdallah, Samer M.

    2001-10-01

    Image matching is one of the fundamental problems of computer vision. Various approaches exist; they differ essentially in the extracted primitives, in the best-match search strategy, and in the final application. Feature-based dense matching methods use geometric primitives such as raw pixels, edges and interest points. Some correlation-based matching methods involve a distance calculation, a time-consuming operation. Enhancing it by adding complex photometric pixel characteristics such as gradient direction, local curvature and local luminosity disparity increases the matching time, and these characteristics are usually very noisy. The matching method's noise dependency and data volume can be reduced by improving the robustness of the interest points. This paper proposes to add to the interest-point primitive a set (vector) of simple geometric and photometric characteristics that are invariant to geometric plane transforms. A matching method based upon these enriched pixels and an accumulation-array concept is presented as well. These elements are useful for 3D obstacle detection in the ongoing Intelligent Glasses project, our final application. The Intelligent Glasses is a vision system for humanoid robots and for blind/visually impaired persons under joint development by Rouen University and the Robotics Laboratory in Paris.

  4. Localization of significant 3D objects in 2D images for generic vision tasks

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Bergevin, Robert

    1995-10-01

    Computer vision experiments are not very often linked to practical applications but rather deal with typical laboratory experiments under controlled conditions. For instance, most object recognition experiments are based on specific models used under limitative constraints. Our work proposes a general framework for rapidly locating significant 3D objects in 2D static images of medium to high complexity, as a prerequisite step to recognition and interpretation when no a priori knowledge of the contents of the scene is assumed. In this paper, a definition of generic objects is proposed, covering the structures that are implied in the image. Under this framework, it must be possible to locate generic objects and assign a significance figure to each one from any image fed to the system. The most significant structure in a given image becomes the focus of interest of the system determining subsequent tasks (like subsequent robot moves, image acquisitions and processing). A survey of existing strategies for locating 3D objects in 2D images is first presented and our approach is defined relative to these strategies. Perceptual grouping paradigms leading to the structural organization of the components of an image are at the core of our approach.

  5. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current error-analysis methods are based on an ideal intersection model and compute the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters. Thus the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we return to the propagation of the primitive input errors through the stereo system and analyze the whole process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
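
    For reference, the midpoint method itself is compact: the reconstructed point is the midpoint of the shortest segment between the two back-projected viewing rays. The sketch below is a generic illustration with assumed camera centres and unit ray directions, not the paper's error-propagation machinery.

```python
# Midpoint triangulation of a 3D point from two viewing rays.
import numpy as np

def midpoint_triangulation(c1, d1, c2, d2):
    """c1, c2: camera centres; d1, d2: unit ray directions through the matched pixels."""
    # Solve for s, t minimising |(c1 + s*d1) - (c2 + t*d2)|^2, i.e. the common perpendicular.
    b = c2 - c1
    A = np.column_stack([d1, -d2])
    s, t = np.linalg.lstsq(A, b, rcond=None)[0]
    p1 = c1 + s * d1
    p2 = c2 + t * d2
    return 0.5 * (p1 + p2)          # midpoint of the shortest segment between the rays

d2 = np.array([-0.1, 0.0, 1.0]); d2 /= np.linalg.norm(d2)
X = midpoint_triangulation(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                           np.array([0.2, 0.0, 0.0]), d2)
print(X)   # approximately (0, 0, 2): the two rays intersect there
```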

  6. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    NASA Astrophysics Data System (ADS)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for a quadruped robot's autonomous navigation system while it walks over rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To address mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.

  7. Obstacle avoidance using predictive vision based on a dynamic 3D world model

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Achtemichuk, Tom

    2006-10-01

    We have designed and implemented a fast predictive vision system for a mobile robot based on the principles of active vision. This vision system is part of a larger project to design a comprehensive cognitive architecture for mobile robotics. The vision system represents the robot's environment with a dynamic 3D world model based on a 3D gaming platform (Ogre3D). This world model contains a virtual copy of the robot and its environment, and outputs graphics showing what the virtual robot "sees" in the virtual world; this is what the real robot expects to see in the real world. The vision system compares this output in real time with the visual data. Any large discrepancies are flagged and sent to the robot's cognitive system, which constructs a plan for focusing on the discrepancies and resolving them, e.g. by updating the position of an object or by recognizing a new object. An object is recognized only once; thereafter its observed data are monitored for consistency with the predictions, greatly reducing the cost of scene understanding. We describe the implementation of this vision system and how the robot uses it to locate and avoid obstacles.

  8. An overview of computer vision

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  9. Laparoscopic pyeloplasty: Initial experience with 3D vision laparoscopy and articulating shears.

    PubMed

    Abou-Haidar, Hiba; Al-Qaoud, Talal; Jednak, Roman; Brzezinski, Alex; El-Sherbiny, Mohamed; Capolicchio, John-Paul

    2016-12-01

    Laparoscopic reconstructive surgery is associated with a steep learning curve related to the use of two-dimensional (2D) vision and rigid instruments. With the advent of robotic surgery, three-dimensional (3D) vision, and articulated instruments, this learning curve has been facilitated. We present a hybrid alternative to robotic surgery, using laparoscopy with 3D vision and articulated shears. To compare outcomes of children undergoing pyeloplasty using 3D laparoscopy with articulated instruments with those undergoing the same surgery using standard laparoscopy with 2D vision and rigid instruments. Medical charts of 33 consecutive patients with ureteropelvic junction obstruction who underwent laparoscopic pyeloplasty by a single surgeon from 2006 to 2013 were reviewed in a retrospective manner. The current 3D cohort was compared with the previous 2D cohort. Data on age, weight, gender, side, operative time, dimension (2D = 19 patients, 3D = 8 patients), presence of a crossing vessel, length of hospital stay, and complication rate were compared between the two groups. Articulating shears were used for pelvotomy and spatulation of the ureter in the 3D group. Statistical tests included linear regression models and chi square tests for trends using STATA software. Operative time per case was decreased by an average of 48 min in the group undergoing 3D laparoscopic pyeloplasty compared with the group undergoing 2D laparoscopic pyeloplasty (p = 0.02) (Figure). Complication rate and length of hospital stay were not significantly affected by the use of 3D laparoscopy. These favorable results are in accordance with previous literature emphasizing the importance of 3D vision in faster and more precise execution of complex surgical maneuvers. The use of flexible instruments has also helped overcome the well-described delicate step of a dismembered pyeloplasty, namely the pelvotomy and ureteral spatulation. Limitations of this study are those inherent to the

  10. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces, and the depth map contains numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.
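
    As a rough illustration of the pose-estimation step only (not the authors' pipeline), the sketch below recovers head pose from 2-D facial landmarks matched to the corresponding 3-D landmarks of a morphable model's mean face, using a standard PnP solve in OpenCV. The landmark arrays and the intrinsic matrix are assumed to be given; the function name is hypothetical.

```python
# Generic PnP head-pose sketch from 2D-3D landmark correspondences.
import cv2
import numpy as np

def estimate_head_pose(model_landmarks_3d, image_landmarks_2d, K):
    """model_landmarks_3d: (N, 3) points of the mean face; image_landmarks_2d: (N, 2) detected pixels."""
    ok, rvec, tvec = cv2.solvePnP(model_landmarks_3d.astype(np.float64),
                                  image_landmarks_2d.astype(np.float64),
                                  K.astype(np.float64), None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)      # 3x3 rotation of the head pose
    return ok, R, tvec              # pose of the face model in the camera frame
```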

  11. Efficient Computation of 3D Clipped Voronoi Diagram

    NASA Astrophysics Data System (ADS)

    Yan, Dong-Ming; Wang, Wenping; Lévy, Bruno; Liu, Yang

    The Voronoi diagram is a fundamental geometry structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact 3D domain (i.e. a finite 3D volume), some Voronoi cells of their Voronoi diagram are infinite, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm for computing the clipped Voronoi diagram for a set of sites with respect to a compact 3D volume, assuming that the volume is represented as a tetrahedral mesh. We also describe an application of the proposed method to implementing a fast method for optimal tetrahedral mesh generation based on the centroidal Voronoi tessellation.
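
    The notion of a clipped Voronoi diagram can be conveyed with a much cruder approximation than the paper's algorithm: label sample points of a compact domain by their nearest site. The sketch below does this on a regular grid inside an axis-aligned box (instead of a tetrahedral mesh) and estimates clipped-cell volumes from the label counts; it is purely illustrative, and the function name and resolution are arbitrary.

```python
# Voxel-based approximation of a clipped Voronoi diagram inside a box domain.
import numpy as np
from scipy.spatial import cKDTree

def approximate_clipped_voronoi(sites, box_min, box_max, resolution=64):
    axes = [np.linspace(lo, hi, resolution) for lo, hi in zip(box_min, box_max)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    labels = cKDTree(sites).query(grid)[1]                # nearest site for every sample point
    spacing = (np.asarray(box_max, float) - np.asarray(box_min, float)) / (resolution - 1)
    volumes = np.bincount(labels, minlength=len(sites)) * np.prod(spacing)
    return labels.reshape((resolution,) * 3), volumes     # clipped cells and their approximate volumes

sites = np.random.rand(20, 3)
labels, volumes = approximate_clipped_voronoi(sites, (0, 0, 0), (1, 1, 1))
print(volumes.sum())   # roughly 1.0, the volume of the clipped domain
```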

  12. Measurement Error with Different Computer Vision Techniques

    NASA Astrophysics Data System (ADS)

    Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.

    2017-09-01

    The goal of this work is to offer a comparison of the measurement error of different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour, and fringe profilometry, and finds the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results of the techniques, the average errors, standard deviations, and uncertainties, obtaining a guide to identify the tolerances that each technique can achieve and to choose the best one.
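
    The evaluation itself reduces to simple statistics over repeated measurements of a known gauge. The sketch below is illustrative only (variable names and values are hypothetical): it computes the mean error, its standard deviation, and the standard uncertainty of the mean for one technique and one gauge.

```python
# Basic error statistics for one technique against a calibrated gauge.
import numpy as np

def measurement_error_stats(measured, reference_value):
    """measured: repeated measurements of one gauge; reference_value: its calibrated value."""
    errors = np.asarray(measured, dtype=float) - reference_value
    mean_error = errors.mean()                      # systematic offset (bias)
    std_error = errors.std(ddof=1)                  # spread of the errors
    u_mean = std_error / np.sqrt(len(errors))       # standard uncertainty of the mean error
    return mean_error, std_error, u_mean

# e.g. ten measurements of a 50.000 mm gauge block by one technique
print(measurement_error_stats([50.012, 49.998, 50.021, 50.005, 49.990,
                               50.017, 50.003, 49.995, 50.010, 50.008], 50.000))
```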

  13. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  14. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  15. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
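
    The record above describes a spiking-network solution, which is not reproduced here. As a rough point of comparison only, the sketch below is a naive temporal-coincidence matcher for rectified event streams (hypothetical event tuples); the network's coincidence detection generalizes this kind of baseline.

    ```python
    import numpy as np

    # Hypothetical rectified event streams: each event is (timestamp_us, x, y, polarity).
    # After rectification, corresponding events lie on the same row y, so candidate
    # matches are restricted to a disparity range and a small temporal window -- a crude
    # stand-in for the coincidence detection a spiking stereo network performs.
    def match_events(left, right, max_disparity=40, time_window_us=500):
        matches = []
        for t_l, x_l, y_l, p_l in left:
            best = None
            for t_r, x_r, y_r, p_r in right:
                d = x_l - x_r
                if (y_l == y_r and p_l == p_r and 0 <= d <= max_disparity
                        and abs(t_l - t_r) <= time_window_us):
                    if best is None or abs(t_l - t_r) < best[0]:
                        best = (abs(t_l - t_r), d)
            if best is not None:
                matches.append((x_l, y_l, best[1]))   # (x, y, disparity)
        return matches

    left_events = [(1000, 120, 64, 1), (1450, 87, 30, 0)]
    right_events = [(1020, 102, 64, 1), (1500, 70, 30, 0)]
    print(match_events(left_events, right_events))
    ```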

  16. Computational challenges of emerging novel true 3D holographic displays

    NASA Astrophysics Data System (ADS)

    Cameron, Colin D.; Pain, Douglas A.; Stanley, Maurice; Slinger, Christopher W.

    2000-11-01

    A hologram can produce all the 3D depth cues that the human visual system uses to interpret and perceive real 3D objects. As such it is arguably the ultimate display technology. Computer generated holography, in which a computer calculates a hologram that is then displayed using a highly complex modulator, combines the ultimate qualities of a traditional hologram with the dynamic capabilities of a computer display producing a true 3D real image floating in space. This technology is set to emerge over the next decade, potentially revolutionizing application areas such as virtual prototyping (CAD-CAM, CAID etc.), tactical information displays, data visualization and simulation. In this paper we focus on the computational challenges of this technology. We consider different classes of computational algorithms from true computer-generated holograms (CGH) to holographic stereograms. Each has different characteristics in terms of image qualities, computational resources required, total CGH information content, and system performance. Possible trade-offs will be discussed including reducing the parallax. The software and hardware architectures used to implement the CGH algorithms have many possible forms. Different schemes, from high performance computing architectures to graphics based cluster architectures will be discussed and compared. Assessment will be made of current and future trends looking forward to a practical dynamic CGH based 3D display.
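
    To illustrate why dynamic CGH is computationally demanding, here is a minimal point-source hologram sketch (not any of the authors' algorithms): every object point contributes a spherical wave to every hologram pixel, so cost grows with the product of the two counts. The wavelength, pixel pitch, and object points are hypothetical.

    ```python
    import numpy as np

    # Minimal point-source CGH sketch: each object point adds a spherical wave at every
    # hologram pixel, so the cost scales as (object points) x (hologram pixels).
    wavelength = 532e-9                   # metres
    k = 2 * np.pi / wavelength
    pitch = 8e-6                          # hologram pixel pitch (m)
    nx = ny = 256                         # small hologram for illustration

    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)

    # Hypothetical 3D object points (x, y, z) in metres, z measured from the hologram plane.
    points = [(0.0, 0.0, 0.05), (0.3e-3, -0.2e-3, 0.06)]

    field = np.zeros((ny, nx), dtype=complex)
    for px, py, pz in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += np.exp(1j * k * r) / r   # spherical wave from this object point

    phase_hologram = np.angle(field)      # phase-only CGH for a phase modulator
    ```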

  17. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information

    NASA Astrophysics Data System (ADS)

    Yang, Jinghao; Jia, Zhenyuan; Liu, Wei; Fan, Chaonan; Xu, Pengtao; Wang, Fuji; Liu, Yang

    2016-10-01

    Binocular vision systems play an important role in computer vision, and high-precision system calibration is a necessary and indispensable process. In this paper, an improved calibration method for binocular stereo vision measurement systems based on arbitrary translations and 3D-connection information is proposed. First, a new method for calibrating the intrinsic parameters of binocular vision system based on two translations with an arbitrary angle difference is presented, which reduces the effect of the deviation of the motion actuator on calibration accuracy. This method is simpler and more accurate than existing active-vision calibration methods and can provide a better initial value for the determination of extrinsic parameters. Second, a 3D-connection calibration and optimization method is developed that links the information of the calibration target in different positions, further improving the accuracy of the system calibration. Calibration experiments show that the calibration error can be reduced to 0.09%, outperforming traditional methods for the experiments of this study.
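
    The paper's arbitrary-translation calibration is not reproduced here. As a point of reference, the sketch below runs a conventional checkerboard-based binocular calibration with OpenCV, which recovers the same kinds of intrinsic and extrinsic parameters; the pattern size and image file names are hypothetical.

    ```python
    import cv2
    import numpy as np

    pattern = (9, 6)              # inner corners of a hypothetical checkerboard
    square = 10.0                 # square size in mm
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts, size = [], [], [], None
    for i in range(15):           # hypothetical image pairs left_00.png ... right_14.png
        imgL = cv2.imread(f"left_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
        imgR = cv2.imread(f"right_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
        if imgL is None or imgR is None:
            continue
        okL, cornersL = cv2.findChessboardCorners(imgL, pattern)
        okR, cornersR = cv2.findChessboardCorners(imgR, pattern)
        if okL and okR:
            obj_pts.append(objp)
            left_pts.append(cornersL)
            right_pts.append(cornersR)
            size = imgL.shape[::-1]

    if obj_pts:
        # Intrinsics of each camera, then the extrinsic rotation R and translation T
        # between them; the reprojection RMS is the usual accuracy figure.
        _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
        _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
        rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
            obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
            flags=cv2.CALIB_FIX_INTRINSIC)
        print("stereo reprojection RMS (px):", rms)
    ```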

  18. Computer_Vision

    SciTech Connect

    Justin Doak, LANL

    2002-10-04

    The Computer_Vision software performs object recognition using a novel multi-scale characterization and matching algorithm. To understand the multi-scale characterization and matching software, it is first necessary to understand some details of the Computer Vision (CV) Project. This project has focused on providing algorithms and software that provide an end-to-end toolset for image processing applications. At a high level, this end-to-end toolset focuses on 7 key steps. The first steps are geometric transformations. 1) Image Segmentation. This step essentially classifies pixels in the input image as either being of interest or not of interest. We have also used GENIE segmentation output for this Image Segmentation step. 2) Contour Extraction (patent submitted). This takes the output of Step 1 and extracts contours for the blobs consisting of pixels of interest. 3) Constrained Delaunay Triangulation. This is a well-known geometric transformation that creates triangles inside the contours. 4) Chordal Axis Transform (CAT). This patented geometric transformation takes the triangulation output from Step 3 and creates a concise and accurate structural representation of a contour. From the CAT, we create a linguistic string, with associated metrical information, that provides a detailed structural representation of a contour. 5) Normalization. This takes an attributed linguistic string output from Step 4 and balances it. This ensures that the linguistic representation accurately represents the major sections of the contour. Steps 6 and 7 are implemented by the multi-scale characterization and matching software. 6) Multi-scale Characterization. This takes as input the attributed linguistic string output from Normalization. Rules from a context-free grammar are applied in reverse to create a tree-like representation for each contour. For example, one of the grammar’s rules is L -> (LL). When an (LL) is seen in a string, a parent node is created that points to the four
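
    A short sketch of the first pipeline stages, hedged as an approximation: it uses Otsu thresholding and SciPy's unconstrained Delaunay triangulation on a synthetic blob as stand-ins for the GENIE segmentation and the constrained triangulation named in the record.

    ```python
    import cv2
    import numpy as np
    from scipy.spatial import Delaunay

    # Step 1 (segmentation, approximated by thresholding), step 2 (contour extraction),
    # and step 3 (triangulation, here unconstrained) on a synthetic blob image.
    img = np.zeros((200, 200), np.uint8)
    cv2.circle(img, (100, 100), 60, 255, -1)                 # synthetic blob of interest

    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        pts = c.reshape(-1, 2).astype(np.float64)
        if len(pts) >= 4:
            tri = Delaunay(pts)                              # triangles from contour points
            print(f"contour with {len(pts)} points -> {len(tri.simplices)} triangles")
    ```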

  19. Parallel processing for computer vision and display

    SciTech Connect

    Dew, P.M. . Dept. of Computer Studies); Earnshaw, R.A. ); Heywood, T.R. )

    1989-01-01

    The widespread availability of high performance computers has led to an increased awareness of the importance of visualization techniques particularly in engineering and science. However, many visualization tasks involve processing large amounts of data or manipulating complex computer models of 3D objects. For example, in the field of computer aided engineering it is often necessary to display and edit a solid object (see Plate 1), which can take many minutes even on the fastest serial processors. Another example of a computationally intensive problem, this time from computer vision, is the recognition of objects in a 3D scene from a stereo image pair. To perform visualization tasks of this type in real and reasonable time it is necessary to exploit the advances in parallel processing that have taken place over the last decade. This book uniquely provides a collection of papers from leading visualization researchers with a common interest in the application and exploitation of parallel processing techniques.

  20. Machine vision is not computer vision

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Charlier, Jean-Ray

    1998-10-01

    The identity of Machine Vision as an academic and practical subject of study is asserted. In particular, the distinction between Machine Vision on the one hand and Computer Vision, Digital Image Processing, Pattern Recognition and Artificial Intelligence on the other is emphasized. The article demonstrates through four case studies that the active involvement of a person who is sensitive to the broad aspects of vision system design can avoid disaster and can often achieve a successful machine that would not otherwise have been possible. This article is a transcript of the keynote address presented at the conference. Since the proceedings are prepared and printed before the conference, it is not possible to include a record of the response to this paper made by the delegates during the round-table discussion. It is hoped to collate and disseminate these via the World Wide Web after the event. (A link will be provided at http://bruce.cs.cf.ac.uk/bruce/index.html.).

  1. Multitasking the code ARC3D. [for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Barton, John T.; Hsiung, Christopher C.

    1986-01-01

    The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
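
    A small back-of-the-envelope sketch (not from the report): Amdahl's law relates the reported wall-clock speedup of just over three on four processors to the fraction of run time that is actually multitasked.

    ```python
    # Amdahl's-law sketch: speedup = 1 / ((1 - p) + p / n) for parallel fraction p on n CPUs.
    def amdahl_speedup(parallel_fraction: float, n_procs: int) -> float:
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_procs)

    for p in (0.80, 0.90, 0.95, 0.99):
        print(f"parallel fraction {p:.2f}: speedup on 4 CPUs = {amdahl_speedup(p, 4):.2f}x")
    # A speedup slightly above 3x on four processors corresponds to roughly 90% of the
    # run time being multitasked.
    ```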

  2. High-speed weight estimation of whole herring (Clupea harengus) using 3D machine vision.

    PubMed

    Mathiassen, John Reidar; Misimi, Ekrem; Toldnes, Bendik; Bondø, Morten; Østvik, Stein Ove

    2011-08-01

    Weight is an important parameter by which the price of whole herring (Clupea harengus) is determined. Current mechanical weight graders are capable of a high throughput but have a relatively low accuracy. For this reason, there is a need for a more accurate high-speed weight estimation of whole herring. A 3-dimensional (3D) machine vision system was developed for high-speed weight estimation of whole herring. The system uses a 3D laser triangulation system above a conveyor belt moving at a speed of 1000 mm/s. Weight prediction models were developed for several feature sets, and a linear regression model using several 2-dimensional (2D) and 3D features enabled more accurate weight estimation than using 3D volume only. Using the combined 2D and 3D features, the root mean square error of cross-validation was 5.6 g, and the worst-case prediction error, evaluated by cross-validation, was ±14 g, for a sample (n = 179) of fresh whole herring. The proposed system has the potential to enable high-speed and accurate weight estimation of whole herring in the processing plants. The 3D machine vision system presented in this article enables high-speed and accurate weight estimation of whole herring, thus enabling an increase in profitability for the pelagic primary processors through a more accurate weight grading. © 2011 Institute of Food Technologists®
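
    An illustrative sketch of the modelling approach described here, with synthetic data and made-up feature names rather than the authors' measurements: a linear regression on combined 2D and 3D features evaluated by k-fold cross-validated RMSE.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold

    # Synthetic stand-in for the herring data: one 3D feature and two 2D features.
    rng = np.random.default_rng(0)
    n = 179
    volume_cm3 = rng.uniform(80, 220, n)            # 3D feature
    area_cm2 = rng.uniform(40, 110, n)              # 2D feature
    length_cm = rng.uniform(20, 35, n)              # 2D feature
    X = np.column_stack([volume_cm3, area_cm2, length_cm])
    weight_g = 0.9 * volume_cm3 + 0.4 * area_cm2 + rng.normal(0, 5, n)  # synthetic target

    rmses = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = LinearRegression().fit(X[train], weight_g[train])
        resid = weight_g[test] - model.predict(X[test])
        rmses.append(np.sqrt(np.mean(resid ** 2)))

    print(f"cross-validated RMSE: {np.mean(rmses):.1f} g")
    ```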

  3. Image quality enhancement and computation acceleration of 3D holographic display using a symmetrical 3D GS algorithm.

    PubMed

    Zhou, Pengcheng; Bi, Yong; Sun, Minyuan; Wang, Hao; Li, Fang; Qi, Yan

    2014-09-20

    The 3D Gerchberg-Saxton (GS) algorithm can be used to compute a computer-generated hologram (CGH) to produce a 3D holographic display. But, using the 3D GS method, there exists a serious distortion in reconstructions of binary input images. We have eliminated the distortion and improved the image quality of the reconstructions by a maximum of 486%, using a symmetrical 3D GS algorithm that is developed based on a traditional 3D GS algorithm. In addition, the hologram computation speed has been accelerated by 9.28 times, which is significant for real-time holographic displays.
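
    For reference, a minimal implementation of the classic 2D Gerchberg-Saxton iteration with FFTs; the paper's symmetrical 3D variant and its acceleration are not reproduced here, and the binary target below is hypothetical.

    ```python
    import numpy as np

    # Classic 2D Gerchberg-Saxton: alternate between the hologram and image planes,
    # keeping the computed phase and re-imposing the known amplitude in each plane.
    def gerchberg_saxton(target_amplitude, n_iter=50):
        phase = np.random.default_rng(0).uniform(0, 2 * np.pi, target_amplitude.shape)
        for _ in range(n_iter):
            image_field = target_amplitude * np.exp(1j * phase)
            holo_field = np.fft.ifft2(image_field)
            holo_phase = np.angle(holo_field)              # phase-only hologram estimate
            recon = np.fft.fft2(np.exp(1j * holo_phase))   # unit-amplitude hologram
            phase = np.angle(recon)                        # keep phase, re-impose amplitude
        return holo_phase

    # Hypothetical binary target: a bright square on a dark background.
    target = np.zeros((256, 256))
    target[96:160, 96:160] = 1.0
    hologram_phase = gerchberg_saxton(target)
    ```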

  4. 3D measurement system based on computer-generated gratings

    NASA Astrophysics Data System (ADS)

    Zhu, Yongjian; Pan, Weiqing; Luo, Yanliang

    2010-08-01

    A new kind of 3D measurement system has been developed to achieve the 3D profile of complex object. The principle of measurement system is based on the triangular measurement of digital fringe projection, and the fringes are fully generated from computer. Thus the computer-generated four fringes form the data source of phase-shifting 3D profilometry. The hardware of system includes the computer, video camera, projector, image grabber, and VGA board with two ports (one port links to the screen, another to the projector). The software of system consists of grating projection module, image grabbing module, phase reconstructing module and 3D display module. A software-based synchronizing method between grating projection and image capture is proposed. As for the nonlinear error of captured fringes, a compensating method is introduced based on the pixel-to-pixel gray correction. At the same time, a least square phase unwrapping is used to solve the problem of phase reconstruction by using the combination of Log Modulation Amplitude and Phase Derivative Variance (LMAPDV) as weight. The system adopts an algorithm from the MATLAB Toolbox for camera calibration. The 3D measurement system has an accuracy of 0.05 mm. The execution time of system is 3-5 s for one-time measurement.
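
    The core phase-retrieval step of four-step phase-shifting profilometry, sketched with simulated fringes; the gray correction, weighted least-squares unwrapping, and calibration described in the record are omitted.

    ```python
    import numpy as np

    # Four fringes shifted by pi/2 give the wrapped phase via an arctangent:
    # I_k = A + B*cos(phi + (k-1)*pi/2)  ->  phi = atan2(I4 - I2, I1 - I3)
    def wrapped_phase(I1, I2, I3, I4):
        return np.arctan2(I4 - I2, I1 - I3)

    # Hypothetical captured fringe images simulated from a smooth test phase map.
    h, w = 480, 640
    xx = np.linspace(0, 8 * np.pi, w)
    phi_true = np.tile(xx, (h, 1)) + 0.5 * np.sin(np.linspace(0, np.pi, h))[:, None]
    A, B = 120.0, 100.0
    frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

    phi_wrapped = wrapped_phase(*frames)         # values in (-pi, pi], to be unwrapped
    ```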

  5. Computer vision syndrome: a review.

    PubMed

    Blehm, Clayton; Vishnu, Seema; Khattak, Ashbala; Mitra, Shrabanee; Yee, Richard W

    2005-01-01

    As computers become part of our everyday life, more and more people are experiencing a variety of ocular symptoms related to computer use. These include eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome. This article describes both the characteristics and treatment modalities that are available at this time. Computer vision syndrome symptoms may arise from ocular (ocular-surface abnormalities or accommodative spasms) and/or extraocular (ergonomic) etiologies. However, the major contributor to computer vision syndrome symptoms by far appears to be dry eye. The visual effects of various display characteristics such as lighting, glare, display quality, refresh rates, and radiation are also discussed. Treatment requires a multidirectional approach combining ocular therapy with adjustment of the workstation. Proper lighting, anti-glare filters, ergonomic positioning of computer monitor and regular work breaks may help improve visual comfort. Lubricating eye drops and special computer glasses help relieve ocular surface-related symptoms. More work needs to be done to specifically define the processes that cause computer vision syndrome and to develop and improve effective treatments that successfully address these causes.

  6. 3D vision sensor and its algorithm on clone seedlings plant system

    NASA Astrophysics Data System (ADS)

    Hayashi, Jun-ichiro; Hiroyasu, Takehisa; Hojo, Hirotaka; Hata, Seiji; Okada, Hiroshi

    2007-01-01

    Today, vision systems for robots have been widely applied to many important applications. But 3-D vision systems for industrial use must face many practical problems. Here, a vision system for bio-production is introduced. Clone seedlings plants are one of the important applications of biotechnology. Most of the production processes of clone seedlings plants are highly automated, but the transplanting process of the small seedlings plants cannot be automated, because the shapes of the small seedlings plants are not stable and, in order to handle them, it is necessary to observe their shapes. In this research, a robot vision system has been introduced for the transplanting process in a plant factory.

  7. 3D mosaic method in monocular vision measurement system for large-scale equipment

    NASA Astrophysics Data System (ADS)

    Xu, Qiaoyu; Wang, Junwei; Che, Rensheng

    2010-08-01

    In order to enlarge the measurement range of the monocular vision measurement system, enhance its measurement precision in the depth direction, and realize the accurate measurement of large-scale equipment, a 3D mosaic method based on an assistant target is proposed in this paper. Using images of the assistant target taken from two adjacent measuring positions, and on the basis of feature-point matching and the epipolar constraint, the method quickly computes the rotation and translation between the two adjacent measuring positions through a linear algorithm followed by LM iteration, and the scale factor of the translation is calculated from the known distances between the feature points on the assistant target. Finally, the measurement data in the local coordinate systems of the different measuring positions are transformed into the global measurement coordinate system using the transformation between adjacent positions, and a three-dimensional mosaic of the measurement data is obtained. This method overcomes the problem that the precision of the feature-point three-dimensional coordinates in different coordinate systems seriously affects the mosaic accuracy. The experimental results prove that the proposed method is flexible and valid, and that it not only enlarges the measurement scope but also raises the measurement precision for large-scale equipment.
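
    A sketch of the core relative-orientation step with OpenCV and synthetic data (not the authors' exact pipeline): the essential matrix yields the rotation and a unit-norm translation between the two positions, and the assistant-target distances described in the record would then fix the translation scale.

    ```python
    import cv2
    import numpy as np

    K = np.array([[1200.0, 0, 640.0],
                  [0, 1200.0, 480.0],
                  [0, 0, 1.0]])                                  # hypothetical intrinsics

    # Synthesize assistant-target points and their projections from two positions.
    rng = np.random.default_rng(0)
    Xw = rng.uniform([-0.3, -0.3, 1.5], [0.3, 0.3, 2.5], (30, 3))   # target points (m)
    R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.2], [0.05]]))     # "true" relative pose
    t_true = np.array([0.4, 0.05, 0.1])

    def project(points, R, t):
        cam = points @ R.T + t
        uv = cam @ K.T
        return uv[:, :2] / uv[:, 2:3]

    pts1 = project(Xw, np.eye(3), np.zeros(3))       # view from measuring position 1
    pts2 = project(Xw, R_true, t_true)               # view from measuring position 2

    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    # t is recovered only up to scale; once the scale s is known from the target geometry,
    # data from position 2 map into the position-1 frame as X1 = R.T @ (X2 - s * t).
    print("rotation:\n", R, "\nunit-norm translation:", t.ravel())
    ```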

  8. The 3-D inelastic analyses for computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Hopkins, D. A.; Chamis, C. C.

    1989-01-01

    The 3-D inelastic analysis method is a focused program with the objective to develop computationally effective analysis methods and attendant computer codes for three-dimensional, nonlinear time and temperature dependent problems present in the hot section of turbojet engine structures. Development of these methods was a major part of the Hot Section Technology (HOST) program over the past five years at Lewis Research Center.

  9. Intelligent robots and computer vision

    SciTech Connect

    Casasent, D.P.

    1985-01-01

    This book presents the papers given at a conference which examined artificial intelligence and image processing in relation to robotics. Topics considered at the conference included feature extraction and pattern recognition for computer vision, image processing for intelligent robotics, robot sensors, image understanding and artificial intelligence, optical processing techniques in robotic applications, robot languages and programming, processor architectures for computer vision, mobile robots, multisensor fusion, three-dimensional modeling and recognition, intelligent robots applications, and intelligent robot systems.

  10. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition

    NASA Astrophysics Data System (ADS)

    Mei, Qing; Gao, Jian; Lin, Hui; Chen, Yun; Yunbo, He; Wang, Wei; Zhang, Guanjin; Chen, Xin

    2016-11-01

    We designed a new three-dimensional (3D) measurement system for micro components: a structure light telecentric stereoscopic vision 3D measurement system based on the Scheimpflug condition. This system creatively combines the telecentric imaging model and the Scheimpflug condition on the basis of structure light stereoscopic vision, having benefits of a wide measurement range, high accuracy, fast speed, and low price. The system measurement range is 20 mm×13 mm×6 mm, the lateral resolution is 20 μm, and the practical vertical resolution reaches 2.6 μm, which is close to the theoretical value of 2 μm and well satisfies the 3D measurement needs of micro components such as semiconductor devices, photoelectron elements, and micro-electromechanical systems. In this paper, we first introduce the principle and structure of the system and then present the system calibration and 3D reconstruction. We then present an experiment that was performed for the 3D reconstruction of the surface topography of a wafer, followed by a discussion. Finally, the conclusions are presented.

  11. A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding

    PubMed Central

    Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo

    2017-01-01

    For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may be different from the predetermined path. Therefore, it is significant to recognize the actual groove position using machine vision methods and perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, the existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. This method obtains two kinds of visual information nearly at the same time, namely 2D pixel coordinates of the groove in uniform lighting condition and 3D point cloud data of the workpiece surface in cross-line laser lighting condition. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments are carried out at an actuator speed of 2300 mm/min and groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied in path teaching fields of 3D complex components. PMID:28492481

  12. NeuroGlasses: a neural sensing healthcare system for 3-D vision technology.

    PubMed

    Gong, Fang; Xu, Wenyao; Lee, Jueh-Yu; He, Lei; Sarrafzadeh, Majid

    2012-03-01

    3-D vision technologies are extensively used in a wide variety of applications. In particular, glasses-based 3-D technology is increasingly becoming affordable in our daily lives. Considering the health issues raised by 3-D video technologies, to the best of our knowledge most of the pilot studies have been conducted only in highly controlled laboratory environments. In this paper, we present NeuroGlasses, a nonintrusive wearable physiological signal monitoring system to facilitate health analysis and diagnosis of 3-D video watchers. The NeuroGlasses system acquires health-related signals by physiological sensors and provides feedback on health-related features. Moreover, the NeuroGlasses system employs signal-specific reconstruction and feature extraction to compensate for the distortion of signals caused by variation in the placement of the sensors. We also propose a server-based NeuroGlasses infrastructure where physiological features can be extracted for real-time response or collected on the server side for long-term analysis and diagnosis. Through an on-campus pilot study, the experimental results show that the NeuroGlasses system can effectively provide physiological information for healthcare purposes. Furthermore, it confirms that 3-D vision technology has a significant impact on physiological signals, such as EEG, which potentially leads to neural diseases. © 2012 IEEE

  13. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

    The 3D computer graphics (3D-CG) animation using a virtual actor's speaking is very effective as an educational medium. But it takes a long time to produce a 3D-CG animation. To reduce the cost of producing 3D-CG educational contents and improve the capability of the education system, we have developed a new education system using Virtual Actor.…

  15. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694

  16. On the use of orientation filters for 3D reconstruction in event-driven stereo vision.

    PubMed

    Camuñas-Mesa, Luis A; Serrano-Gotarredona, Teresa; Ieng, Sio H; Benosman, Ryad B; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.
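
    A frame-based illustration of the orientation cue (synthetic test image, not event data): a small bank of OpenCV Gabor kernels assigns each pixel a dominant edge orientation, which is the kind of extra matching constraint the paper exploits on event streams. Filter parameters are hypothetical.

    ```python
    import cv2
    import numpy as np

    # Gabor filter bank at n_orient orientations; each pixel takes the orientation of
    # the strongest filter response.
    def dominant_orientation(gray, n_orient=4, ksize=21, sigma=4.0, lambd=10.0):
        responses = []
        for i in range(n_orient):
            theta = i * np.pi / n_orient
            kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
            responses.append(np.abs(cv2.filter2D(gray.astype(np.float32), -1, kernel)))
        return np.argmax(np.stack(responses, axis=0), axis=0) * (180.0 / n_orient)

    img = np.zeros((240, 320), np.uint8)
    cv2.line(img, (20, 200), (300, 40), 255, 3)      # synthetic oblique edge
    orientation_map = dominant_orientation(img)      # per-pixel orientation in degrees
    ```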

  17. FUN3D and CFL3D Computations for the First High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Lee-Rausch, Elizabeth M.; Rumsey, Christopher L.

    2011-01-01

    Two Reynolds-averaged Navier-Stokes codes were used to compute flow over the NASA Trapezoidal Wing at high lift conditions for the 1st AIAA CFD High Lift Prediction Workshop, held in Chicago in June 2010. The unstructured-grid code FUN3D and the structured-grid code CFL3D were applied to several different grid systems. The effects of code, grid system, turbulence model, viscous term treatment, and brackets were studied. The SST model on this configuration predicted lower lift than the Spalart-Allmaras model at high angles of attack; the Spalart-Allmaras model agreed better with experiment. Neglecting viscous cross-derivative terms caused poorer prediction in the wing tip vortex region. Output-based grid adaptation was applied to the unstructured-grid solutions. The adapted grids better resolved wake structures and reduced flap flow separation, which was also observed in uniform grid refinement studies. Limitations of the adaptation method as well as areas for future improvement were identified.

  18. Angle extended linear MEMS scanning system for 3D laser vision sensor

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    The scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical design of an angle-extended linear MEMS scanning system, which features a large scanning angle, a small beam divergence angle, and a small spot size for a 3D laser vision sensor. The design principle and theoretical formulas are derived rigorously. With the help of the ZEMAX software, a linear scanning optical system based on MEMS has been designed. Results show that the designed system can extend the scanning angle from ±8° to ±26.5° with a divergence angle smaller than 3.5 mrad, while the spot size is reduced by a factor of 4.545.

  19. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    PubMed

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment leads to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded when tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts to reach proficiency was significantly lower. The study group had significantly faster learning of skills in three out of four individual tasks; instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect later video box-performance in 2D.

  20. Computational model of stereoscopic 3D visual saliency.

    PubMed

    Wang, Junle; Da Silva, Matthieu Perreira; Le Callet, Patrick; Ricordel, Vincent

    2013-06-01

    Many computational models of visual attention performing well in predicting salient areas of 2D images have been proposed in the literature. The emerging applications of stereoscopic 3D displays bring additional depth information that affects the human viewing behavior, and require extensions of the efforts made in 2D visual modeling. In this paper, we propose a new computational model of visual attention for stereoscopic 3D still images. Apart from detecting salient areas based on 2D visual features, the proposed model takes depth as an additional visual dimension. The measure of depth saliency is derived from the eye movement data obtained from an eye-tracking experiment using synthetic stimuli. Two different ways of integrating depth information in the modeling of 3D visual attention are then proposed and examined. For the performance evaluation of 3D visual attention models, we have created an eye-tracking database, which contains stereoscopic images of natural content and is publicly available, along with this paper. The proposed model gives a good performance, compared to that of state-of-the-art 2D models on 2D images. The results also suggest that a better performance is obtained when depth information is taken into account through the creation of a depth saliency map, rather than when it is integrated by a weighting method.
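
    One simple way to fold a depth saliency map into a 2D saliency map is a weighted blend of normalized maps; the sketch below uses hypothetical weights and random inputs and is only loosely related to the integration strategies compared in the paper.

    ```python
    import numpy as np

    # Normalize a 2D saliency map and a depth saliency map to [0, 1] and blend them.
    def normalize(m):
        m = m.astype(np.float64)
        return (m - m.min()) / (m.max() - m.min() + 1e-12)

    def fuse_saliency(sal_2d, sal_depth, w_depth=0.4):
        return (1.0 - w_depth) * normalize(sal_2d) + w_depth * normalize(sal_depth)

    # Hypothetical inputs: any 2D saliency detector output and a depth-derived map.
    sal_2d = np.random.rand(240, 320)
    sal_depth = np.random.rand(240, 320)
    combined = fuse_saliency(sal_2d, sal_depth)
    ```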

  1. NASA's 3D Flight Computer for Space Applications

    NASA Technical Reports Server (NTRS)

    Alkalai, Leon

    2000-01-01

    The New Millennium Program (NMP) Integrated Product Development Team (IPDT) for Microelectronics Systems was planning to validate a newly developed 3D Flight Computer system on its first deep-space flight, DS1, launched in October 1998. This computer, developed in the 1995-97 time frame, contains many new computer technologies previously never used in deep-space systems. They include: advanced 3D packaging architecture for future low-mass and low-volume avionics systems; high-density 3D packaged chip-stacks for both volatile and non-volatile mass memory: 400 Mbytes of local DRAM memory, and 128 Mbytes of Flash memory; high-bandwidth Peripheral Component Interconnect (PCI) local-bus with a bridge to VME; high-bandwidth (20 Mbps) fiber-optic serial bus; and other attributes, such as standard support for Design for Testability (DFT). Even though this computer system was not completed in time for delivery to the DS1 project, it was an important development along a technology roadmap towards highly integrated and highly miniaturized avionics systems for deep-space applications. This continued technology development is now being performed by NASA's Deep Space System Development Program (also known as X2000) and within JPL's Center for Integrated Space Microsystems (CISM).

  2. Application of 3DHiVision: a system with a new 3D HD renderer

    NASA Astrophysics Data System (ADS)

    Sun, Peter; Nagata, Shojiro

    2006-02-01

    This paper discusses some technology breakthroughs that help solve the difficulties which have been hindering the popularity of 3D stereo. We name this the 3DHiVision (3DHV) System Solution. With the advance in technology, modern projection systems and stereo LCD panels have made it possible for many more people to enjoy a 3D stereo video experience in a broader range of applications. However, the key limitations to more mainstream applications of 3D video have been the availability of 3D content and the cost and complexity of 3D video production, content management and playback systems. Even with the easy availability of modern PC-based video production tools, advances in projection technology and greatly increased interest in 3D applications, the 3D video industry remains stagnant and restricted to a small scale, because the cost of producing and playing back high-quality 3D video has remained prohibitively high. We have addressed these difficulties and created a complete end-to-end 3DHiVision (3DHV for short) video system based on an embedded PC platform, which significantly reduces the cost and complexity of creating museum-quality 3D video. With this achievement, professional film makers as well as amateurs will be able to easily create, distribute and play back 3D video content. The HD-Renderer is the central component of our 3DHV solution line. It is highly efficient software capable of decrypting, decoding, dynamically parallax-adjusting and rendering HD video content up to 1920x1080x2x30p in real time on an embedded PC (for theaters) or any other home PC (for the mainstream) with 3.0GHz P4 CPU / GeForce6600GT GPU hardware or above, while 1280x720x2x30p content can be handled with ease on a notebook with a 1.7GHz P4Mobile CPU / GeForce6200 GPU at the time of writing.

  3. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  4. Advanced computational tools for 3-D seismic analysis

    SciTech Connect

    Barhen, J.; Glover, C.W.; Protopopescu, V.A.

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 93-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations and techniques that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

  5. Computational issues connected with 3D N-body simulations

    NASA Astrophysics Data System (ADS)

    Pfenniger, D.; Friedli, D.

    1993-03-01

    Computational problems related to modeling gravitational systems, and running and analyzing 3D N-body models are discussed. N-body simulations using Particle-Mesh techniques with polar grids are especially well-suited, and physically justified, when studying quiet evolutionary processes in disk galaxies. This technique allows large N, high central resolution, and is still the fastest one. Regardless of the method chosen to compute gravitation, softening is a compromise between HF amplification and resolution. Softened spherical and ellipsoidal kernels with variable resolution are set up. Detailed characteristics of the 3D polar grid, tests, code performances, and vectorization rates are also given. For integrating motion in rotating coordinates, a stable symplectic extension of the leap-frog algorithm is described. The technique used to search for periodic orbits in arbitrary N-body potentials and to determine their stability is explained.
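
    A minimal kick-drift-kick leapfrog step with Plummer softening, using direct summation on a toy particle set; it is not the polar-grid Particle-Mesh scheme or the rotating-frame extension described in the record, but it shows the integrator and the role of the softening length.

    ```python
    import numpy as np

    # Direct-summation accelerations with Plummer softening eps (no self-interaction).
    def accelerations(pos, mass, eps, G=1.0):
        acc = np.zeros_like(pos)
        for i in range(len(pos)):
            d = pos - pos[i]                              # vectors to all other bodies
            r2 = (d ** 2).sum(axis=1) + eps ** 2          # softened squared distances
            r2[i] = np.inf                                # exclude the body itself
            acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
        return acc

    # Symplectic kick-drift-kick leapfrog step.
    def leapfrog_step(pos, vel, mass, dt, eps):
        vel = vel + 0.5 * dt * accelerations(pos, mass, eps)   # half kick
        pos = pos + dt * vel                                   # drift
        vel = vel + 0.5 * dt * accelerations(pos, mass, eps)   # half kick
        return pos, vel

    # Hypothetical toy system: 100 equal-mass particles in a unit cube.
    rng = np.random.default_rng(1)
    pos = rng.uniform(-0.5, 0.5, (100, 3))
    vel = np.zeros((100, 3))
    mass = np.full(100, 1.0 / 100)
    for _ in range(10):
        pos, vel = leapfrog_step(pos, vel, mass, dt=0.01, eps=0.05)
    ```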

  6. Computer vision in cell biology.

    PubMed

    Danuser, Gaudenz

    2011-11-23

    Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Development of a computer controlled 3-d braiding machine

    SciTech Connect

    Yan Jianhua; Li Jialu

    1994-12-31

    This paper deals with the development of a large-size, multiuse, computer-controlled 3-D cartesian grid braiding machine, its function and application. The 180 columns and 120 tracks, the flexible and low power consuming driving system, the error detector systems and the computer controlling system are the major parts of the machine. The machine can produce a wide variety of sizes, shapes and patterns of fabrics and can also produce several fabrics at a time.

  8. Geometric Modeling for Computer Vision

    DTIC Science & Technology

    1974-10-01

    Vision and Artificial Intelligence could lead to robots, androids and cyborgs which will be able to see, to think and to feel conscious ... the construction of computer representations of physical objects, cameras, images and light for the sake of simulating their behavior. In Artificial ... specifically, I wish to exclude the connotation that the theory is a natural theory of vision. Perhaps there can be such a thing as an artificial theory

  9. Noninvasive computational imaging of cardiac electrophysiology for 3-D infarct.

    PubMed

    Wang, Linwei; Wong, Ken C L; Zhang, Heye; Liu, Huafeng; Shi, Pengcheng

    2011-04-01

    Myocardial infarction (MI) creates electrophysiologically altered substrates that are responsible for ventricular arrhythmias, such as tachycardia and fibrillation. The presence, size, location, and composition of infarct scar bear significant prognostic and therapeutic implications for individual subjects. We have developed a statistical physiological model-constrained framework that uses noninvasive body-surface-potential data and tomographic images to estimate subject-specific transmembrane-potential (TMP) dynamics inside the 3-D myocardium. In this paper, we adapt this framework for the purpose of noninvasive imaging, detection, and quantification of 3-D scar mass for postMI patients: the framework requires no prior knowledge of MI and converges to final subject-specific TMP estimates after several passes of estimation with intermediate feedback; based on the primary features of the estimated spatiotemporal TMP dynamics, we provide 3-D imaging of scar tissue and quantitative evaluation of scar location and extent. Phantom experiments were performed on a computational model of realistic heart-torso geometry, considering 87 transmural infarct scars of different sizes and locations inside the myocardium, and 12 compact infarct scars (extent between 10% and 30%) at different transmural depths. Real-data experiments were carried out on BSP and magnetic resonance imaging (MRI) data from four postMI patients, validated by gold standards and existing results. This framework shows unique advantage of noninvasive, quantitative, computational imaging of subject-specific TMP dynamics and infarct mass of the 3-D myocardium, with the potential to reflect details in the spatial structure and tissue composition/heterogeneity of 3-D infarct scar.

  10. Majority logic gate for 3D magnetic computing.

    PubMed

    Eichwald, Irina; Breitkreutz, Stephan; Ziemys, Grazvydas; Csaba, György; Porod, Wolfgang; Becherer, Markus

    2014-08-22

    For decades now, microelectronic circuits have been exclusively built from transistors. An alternative way is to use nano-scaled magnets for the realization of digital circuits. This technology, known as nanomagnetic logic (NML), may offer significant improvements in terms of power consumption and integration densities. Further advantages of NML are: non-volatility, radiation hardness, and operation at room temperature. Recent research focuses on the three-dimensional (3D) integration of nanomagnets. Here we show, for the first time, a 3D programmable magnetic logic gate. Its computing operation is based on physically field-interacting nanometer-scaled magnets arranged in a 3D manner. The magnets possess a bistable magnetization state representing the Boolean logic states '0' and '1.' Magneto-optical and magnetic force microscopy measurements prove the correct operation of the gate over many computing cycles. Furthermore, micromagnetic simulations confirm the correct functionality of the gate even for a size in the nanometer-domain. The presented device demonstrates the potential of NML for three-dimensional digital computing, enabling the highest integration densities.

  11. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and a potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for the high accuracy of the 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The results of the algorithms' evaluation show high robustness and high reliability for various motion analysis tasks in technical and biomechanics applications.
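
    The core photogrammetric step of such a system, sketched with hypothetical calibration and pixel data: a tracked marker is triangulated from its pixel coordinates in two synchronized, calibrated cameras using OpenCV.

    ```python
    import cv2
    import numpy as np

    K = np.array([[1000.0, 0, 640.0],
                  [0, 1000.0, 512.0],
                  [0, 0, 1.0]])
    R, _ = cv2.Rodrigues(np.array([[0.0], [0.15], [0.0]]))   # hypothetical relative rotation
    t = np.array([[-0.5], [0.0], [0.0]])                     # hypothetical baseline (metres)

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])        # camera 1 at the origin
    P2 = K @ np.hstack([R, t])                               # camera 2 projection matrix

    pts1 = np.array([[700.0], [520.0]])                      # marker pixel in camera 1 (2xN)
    pts2 = np.array([[455.0], [518.0]])                      # marker pixel in camera 2 (2xN)

    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)          # 4xN homogeneous coordinates
    X = (X_h[:3] / X_h[3]).ravel()
    print("marker position (m):", X)
    ```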

  12. Computational Challenges of 3D Radiative Transfer in Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Jakub, Fabian; Bernhard, Mayer

    2017-04-01

    The computation of radiative heating and cooling rates is one of the most expensive components in today's atmospheric models. The high computational cost stems not only from the laborious integration over a wide range of the electromagnetic spectrum but also from the fact that solving the integro-differential radiative transfer equation for monochromatic light is already rather involved. This has led to the advent of numerous approximations and parameterizations to reduce the cost of the solver. One of the most prominent is the so-called independent pixel approximation (IPA), where horizontal energy transfer is neglected altogether and radiation may only propagate in the vertical direction (1D). Recent studies indicate that the IPA introduces significant errors in high-resolution simulations and affects the evolution and development of convective systems. However, using fully 3D solvers such as Monte Carlo methods is not feasible even on state-of-the-art supercomputers. The parallelization of atmospheric models is often realized by a horizontal domain decomposition, and hence horizontal transfer of energy necessitates communication. For example, a cloud at a low zenith angle will cast a long shadow that potentially has to be communicated through a multitude of processors. Light in the solar spectral range especially may travel long distances through the atmosphere. Concerning highly parallel simulations, it is vital that 3D radiative transfer solvers put a special emphasis on parallel scalability. We will present an introduction to the intricacies of computing 3D radiative heating and cooling rates as well as report on the parallel performance of the TenStream solver. The TenStream is a 3D radiative transfer solver using the PETSc framework to iteratively solve a set of partial differential equations. We investigate two matrix preconditioners: (a) geometric algebraic multigrid preconditioning (MG+GAMG) and (b) block Jacobi incomplete LU (ILU) factorization. The

  13. [Comparison study between biological vision and computer vision].

    PubMed

    Liu, W; Yuan, X G; Yang, C X; Liu, Z Q; Wang, R

    2001-08-01

    The development and significance of biological vision in terms of structure and mechanism are discussed, especially aspects including the anatomical structure of biological vision, a tentative classification of receptive fields, parallel processing of visual information, and feedback and conformity effects of the visual cortex. New advances in the field are introduced through the study of the morphology of biological vision. In addition, a comparison between biological vision and computer vision is made, and their similarities and differences are pointed out.

  14. For 3D laparoscopy: a step toward advanced surgical navigation: how to get maximum benefit from 3D vision.

    PubMed

    Kunert, Wolfgang; Storz, Pirmin; Kirschniak, Andreas

    2013-02-01

    The authors are grateful for the interesting perspectives given by Buchs and colleagues in their letter to the editor entitled "3D Laparoscopy: A Step Toward Advanced Surgical Navigation." Shutter-based 3D video systems failed to become established in the operating room in the late 1990s. To strengthen the starting conditions of the new 3D technology using better monitors and high definition, the authors give suggestions for its practical use in the clinical routine. But first they list the characteristics of single-channeled and bichanneled 3D laparoscopes and describe stereoscopic terms such as "comfort zone," "stereoscopic window," and "near-point distance." The authors believe it would be helpful to have the 3D pioneers assemble and share their experiences with these suggestions. Although this letter discusses "laparoscopy," it would also be interesting to collect experiences from other surgical disciplines, especially when one is considering whether to opt for bi- or single-channeled optics.

  15. Extensible 3D architecture for superconducting quantum computing

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Li, Mengmeng; Dai, Kunzhe; Zhang, Ke; Xue, Guangming; Tan, Xinsheng; Yu, Haifeng; Yu, Yang

    2017-06-01

    Using a multi-layered printed circuit board, we propose a 3D architecture suitable for packaging superconducting chips, especially chips that contain two-dimensional qubit arrays. In our proposed architecture, the center strips of the buried coplanar waveguides protrude from the surface of a dielectric layer as contacts. Since the contacts extend beyond the surface of the dielectric layer, chips can simply be flip-chip packaged with on-chip receptacles clinging to the contacts. Using this scheme, we packaged a multi-qubit chip and performed single-qubit and two-qubit quantum gate operations. The results indicate that this 3D architecture provides a promising scheme for scalable quantum computing.

  16. Computer vision in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Sommer, Gerald

    1990-11-01

    Computer vision is used to overcome the mismatch between user models and implementation models of software systems for image analysis in nuclear medicine. Computer vision in nuclear medicine results in active support of the user by the system. This is achieved by modeling the imaging equipment and schedules, the scenes of interest, and the process of visual image interpretation. Computer vision is demonstrated especially in the low-level and medium-level range. Special highlights are given for the estimation of image quality, a uniform approach to enhancement and restoration of images, and the analysis of shape and dynamics of patterns.

  17. 3D seismic imaging on massively parallel computers

    SciTech Connect

    Womble, D.E.; Ober, C.C.; Oldfield, R.

    1997-02-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is a key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D, finite-difference, prestack, depth migration remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D, prestack, depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable, seismic imaging code.

  18. Harnessing vision for computation.

    PubMed

    Changizi, Mark

    2008-01-01

    Might it be possible to harness the visual system to carry out artificial computations, somewhat akin to how DNA has been harnessed to carry out computation? I provide the beginnings of a research programme attempting to do this. In particular, new techniques are described for building 'visual circuits' (or 'visual software') using wire, NOT, OR, and AND gates in a visual modality such that our visual system acts as 'visual hardware' computing the circuit, and generating a resultant perception which is the output.

  19. Research on 3D reconstruction measurement and parameter of cavitation bubble based on stereo vision

    NASA Astrophysics Data System (ADS)

    Li, Shengyong; Ai, Xiaochuan; Wu, Ronghua; Cao, Jing

    2017-02-01

    Cavitation bubbles cause many adverse effects on ship propellers, hydraulic machinery and equipment. In order to research the production mechanism of cavitation bubbles under different conditions, fine measurement and analysis of the parameters of the cavitation bubble zone is indispensable. This paper adopts a non-contact optical measurement method and constructs a binocular stereo vision measurement system adapted to the characteristics of cavitation bubbles, whose texture features are unclear, transparent and difficult to capture. 3D imaging measurement of the cavitation bubbles is carried out using composite dynamic lighting, and 3D reconstruction of the cavitation bubble region yields more accurate characteristic parameters. Test results show that this fine-measurement technique can acquire and analyze the cavitation bubble region and its instability.

  20. Relative stereo 3-D vision sensor and its application for nursery plant transplanting

    NASA Astrophysics Data System (ADS)

    Hata, Seiji; Hayashi, Junichiro; Takahashi, Satoru; Hojo, Hirotaka

    2007-10-01

    Clone nursery plant production is one of the important applications of bio-technology. Most of the production processes of bio-production are highly automated, but the transplanting process of the small nursery plants cannot be automated because the shapes of small nursery plants are not stable. In this research, a transplanting robot system for clone nursery plant production is under development. A 3-D vision system using the relative stereo method detects the shapes and positions of small nursery plants through transparent vessels. A force-controlled robot picks up the plants and transplants them into vessels with artificial soil.

  1. Computing Radiative Transfer in a 3D Medium

    NASA Technical Reports Server (NTRS)

    Von Allmen, Paul; Lee, Seungwon

    2012-01-01

    A package of software computes the time-dependent propagation of a narrow laser beam in an arbitrary three- dimensional (3D) medium with absorption and scattering, using the transient-discrete-ordinates method and a direct integration method. Unlike prior software that utilizes a Monte Carlo method, this software enables simulation at very small signal-to-noise ratios. The ability to simulate propagation of a narrow laser beam in a 3D medium is an improvement over other discrete-ordinate software. Unlike other direct-integration software, this software is not limited to simulation of propagation of thermal radiation with broad angular spread in three dimensions or of a laser pulse with narrow angular spread in two dimensions. Uses for this software include (1) computing scattering of a pulsed laser beam on a material having given elastic scattering and absorption profiles, and (2) evaluating concepts for laser-based instruments for sensing oceanic turbulence and related measurements of oceanic mixed-layer depths. With suitable augmentation, this software could be used to compute radiative transfer in ultrasound imaging in biological tissues, radiative transfer in the upper Earth crust for oil exploration, and propagation of laser pulses in telecommunication applications.

  2. Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor

    PubMed Central

    El Natour, Ghina; Ait-Aider, Omar; Rouveure, Raphael; Berry, François; Faure, Patrice

    2015-01-01

    In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors considering the robustness to the environmental conditions and depth detection ability of the radar, on the one hand, and the high spatial resolution of a vision sensor, on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors’ coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors’ geometry. This method enables one to reconstruct observed features in 3D using one acquisition (static sensor), which is not always met in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data. PMID:26473874

  3. Toward 3D reconstruction of outdoor scenes using an MMW radar and a monocular vision sensor.

    PubMed

    Natour, Ghina El; Ait-Aider, Omar; Rouveure, Raphael; Berry, François; Faure, Patrice

    2015-10-14

    In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors considering the robustness to the environmental conditions and depth detection ability of the radar, on the one hand, and the high spatial resolution of a vision sensor, on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors' coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors' geometry. This method enables one to reconstruct observed features in 3D using one acquisition (static sensor), which is not always met in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data.
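
    As an illustration of the calibration step described above, the search for the radar-to-camera transformation can be posed as a non-linear least-squares problem over a rotation and translation that minimize the reprojection error of the radar targets in the image. The sketch below assumes planar radar detections given as (range, azimuth), matched pixel observations, a known camera intrinsic matrix K, and SciPy for the optimization; the cost function and parameterization are illustrative and are not the authors' exact criterion.

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def radar_to_xyz(rng, azim):
          """Radar targets in the radar frame, assuming a planar (z = 0) panoramic scan."""
          return np.stack([rng * np.cos(azim), rng * np.sin(azim), np.zeros_like(rng)], axis=1)

      def project(K, R, t, pts):
          """Pinhole projection of 3D points (camera frame = R @ radar frame + t)."""
          cam = pts @ R.T + t
          uv = cam @ K.T
          return uv[:, :2] / uv[:, 2:3]

      def residuals(params, K, radar_pts, pixels):
          R = Rotation.from_rotvec(params[:3]).as_matrix()
          t = params[3:]
          return (project(K, R, t, radar_pts) - pixels).ravel()

      def calibrate(K, rng, azim, pixels, x0=np.zeros(6)):
          """Estimate the radar-to-camera rotation (rotation vector) and translation."""
          radar_pts = radar_to_xyz(rng, azim)
          sol = least_squares(residuals, x0, args=(K, radar_pts, pixels))
          return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]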

  4. Vision-Based Long-Range 3D Tracking, applied to Underground Surveying Tasks

    NASA Astrophysics Data System (ADS)

    Mossel, Annette; Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes; Chmelina, Klaus

    2014-04-01

    To address the need for highly automated positioning systems in underground construction, we present a long-range 3D tracking system based on infrared optical markers. It provides continuous 3D position estimation of static or kinematic targets with low latency over a tracking volume of 12 m x 8 m x 70 m (width x height x depth). Over the entire volume, relative 3D point accuracy with a maximal deviation ≤ 22 mm is ensured with possible target rotations of yaw, pitch = 0 - 45° and roll = 0 - 360°. No preliminary sighting of target(s) is necessary since the system automatically locks onto a target without user intervention and autonomously starts tracking as soon as a target is within the view of the system. The proposed system needs a minimal hardware setup, consisting of two machine vision cameras and a standard workstation for data processing. This allows for quick installation with minimal disturbance of construction work. The data processing pipeline ensures camera calibration and tracking during on-going underground activities. Tests in real underground scenarios prove the system's capability to act as a 3D position measurement platform for multiple underground tasks that require long range, low latency and high accuracy. Those tasks include simultaneous tracking of personnel, machines or robots.
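
    The 3D position estimation from two calibrated machine vision cameras ultimately reduces to triangulating matched marker observations. A minimal sketch with OpenCV is given below; it assumes known intrinsics and the relative pose of the second camera, and it is a generic building block rather than the authors' pipeline.

      import cv2
      import numpy as np

      def triangulate_markers(K1, K2, R, t, pts1, pts2):
          """Triangulate matched 2D marker centroids from a calibrated stereo pair.

          K1, K2: 3x3 intrinsics; R, t: pose of camera 2 relative to camera 1;
          pts1, pts2: Nx2 pixel coordinates of the same markers in each image.
          """
          P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera 1 at the origin
          P2 = K2 @ np.hstack([R, t.reshape(3, 1)])            # camera 2 offset by (R, t)
          X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
          return (X_h[:3] / X_h[3]).T                          # Nx3 points in the camera-1 frame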

  5. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.

  6. 3D ultrasound computer tomography: update from a clinical study

    NASA Astrophysics Data System (ADS)

    Hopp, T.; Zapf, M.; Kretzek, E.; Henrich, J.; Tukalo, A.; Gemmeke, H.; Kaiser, C.; Knaudt, J.; Ruiter, N. V.

    2016-04-01

    Ultrasound Computer Tomography (USCT) is a promising new imaging method for breast cancer diagnosis. We developed a 3D USCT system and tested it in a pilot study with encouraging results: 3D USCT was able to depict two carcinomas, which were present in contrast enhanced MRI volumes serving as ground truth. To overcome severe differences in the breast shape, an image registration was applied. We analyzed the correlation between average sound speed in the breast and the breast density estimated from segmented MRIs and found a positive correlation with R=0.70. Based on the results of the pilot study we now carry out a successive clinical study with 200 patients. For this we integrated our reconstruction methods and image post-processing into a comprehensive workflow. It includes a dedicated DICOM viewer for interactive assessment of fused USCT images. A new preview mode now allows intuitive and faster patient positioning. We updated the USCT system to decrease the data acquisition time by approximately a factor of two and to increase the penetration depth of the breast into the USCT aperture by 1 cm. Furthermore, the compute-intensive reflectivity reconstruction was considerably accelerated, now allowing a sub-millimeter volume reconstruction in approximately 16 minutes. The updates made it possible to successfully image the first patients in our ongoing clinical study.

  7. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    DTIC Science & Technology

    2015-06-01

    Record excerpt (fragmentary in the source): the experimental setup includes a telemetry computer, SSH terminals, and VICON cameras mounted above the granite flat floor of the FSS; point-wise kinematic models are used, and the pose of the 3D structure is then estimated using a dual quaternion method [19].

  8. Incorporating polarization in stereo vision-based 3D perception of non-Lambertian scenes

    NASA Astrophysics Data System (ADS)

    Berger, Kai; Voorhies, Randolph; Matthies, Larry

    2016-05-01

    Surfaces with specular, non-Lambertian reflectance are common in urban areas. Robot perception systems for applications in urban environments need to function effectively in the presence of such materials; however, both passive and active 3-D perception systems have difficulties with them. In this paper, we develop an approach using a stereo pair of polarization cameras to improve passive 3-D perception of specular surfaces. We use a commercial stereo camera pair with rotatable polarization filters in front of each lens to capture images with multiple orientations of the polarization filter. From these images, we estimate the degree of linear polarization (DOLP) and the angle of polarization (AOP) at each pixel in at least one camera. The AOP constrains the corresponding surface normal in the scene to lie in the plane of the observed angle of polarization. We embody this constraint in an energy functional for a regularization-based stereo vision algorithm. This paper describes the theory of polarization needed for this approach, describes the new stereo vision algorithm, and presents results on synthetic and real images to evaluate performance.
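
    The degree and angle of linear polarization can be estimated per pixel from intensity images taken at several filter orientations using the standard linear Stokes parameters. The sketch below assumes three orientations (0°, 45° and 90°); the paper's acquisition details and the subsequent regularized stereo step are not reproduced.

      import numpy as np

      def dolp_aop(i0, i45, i90):
          """Per-pixel degree and angle of linear polarization from intensity images
          captured with the polarization filter at 0, 45 and 90 degrees."""
          s0 = i0 + i90                      # total intensity
          s1 = i0 - i90                      # horizontal vs. vertical preference
          s2 = 2.0 * i45 - i0 - i90          # +45 vs. -45 preference
          dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
          aop = 0.5 * np.arctan2(s2, s1)     # radians
          return dolp, aop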

  9. Computer vision in microstructural analysis

    NASA Technical Reports Server (NTRS)

    Srinivasan, Malur N.; Massarweh, W.; Hough, C. L.

    1992-01-01

    The following is a laboratory experiment designed to be performed by advanced high school and beginning college students. It is hoped that this experiment will create an interest in and further understanding of materials science. The objective of this experiment is to demonstrate that the microstructure of engineered materials is affected by the processing conditions in manufacture, and that it is possible to characterize the microstructure using image analysis with a computer. The principle of computer vision will first be introduced, followed by a description of the system developed at Texas A&M University. This in turn will be followed by a description of the experiment to obtain differences in microstructure and the characterization of the microstructure using computer vision.

  10. Design of 3D vision probe based on auto-focus

    NASA Astrophysics Data System (ADS)

    Liu, Qian; Yuan, Daocheng; Liu, Bo

    2010-11-01

    Machine vision is now widely used for non-contact metrology, which is a growing trend in measurement. In this article, a 3D machine vision probe for engineering use is designed. The XY measurement is done by 2D vision metrology, while the Z height is measured by a microscope through auto-focus (AF). As the critical part of the probe, a long working distance (WD) microscope is designed. To attain the long WD, the microscope is configured from a positive and a negative lens group. The microscope, with a resolution of 1 μm and a WD of 35 mm, is quite close to diffraction-limited, as evidenced by its MTF (Modulation Transfer Function) chart. The AF, a key technology in the probe design, is introduced in detail. Images acquired by the microscope are processed to obtain the AF curve data. To make the AF curve smooth, the images are denoised and the curve is processed with a low-pass filter (LPF), and a new curve-fitting method is used to obtain an accurate focus position. Measurements with the probe show that the uncertainty is 0.03 μm in the XY plane and less than 3 μm in Z height, indicating that the probe meets its requirements.
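
    One common way to build and exploit an AF curve of the kind described above is to compute a sharpness score per z position, smooth the curve, and refine the peak with a local parabolic fit. The sketch below uses the variance of the Laplacian as the sharpness measure and a uniform smoothing filter; these are generic choices, not necessarily the metric, filter or curve-fitting method used by the authors.

      import numpy as np
      from scipy.ndimage import laplace, uniform_filter1d

      def focus_curve(image_stack):
          """One focus value per z position: variance of the Laplacian (sharper => larger)."""
          return np.array([laplace(img.astype(float)).var() for img in image_stack])

      def best_focus(z_positions, curve, smooth=5):
          """Smooth the AF curve and refine its peak with a parabola through 3 samples.
          Assumes uniformly spaced z positions."""
          c = uniform_filter1d(curve, size=smooth)
          k = int(np.argmax(c))
          if 0 < k < len(c) - 1:
              denom = c[k - 1] - 2 * c[k] + c[k + 1]
              dk = 0.5 * (c[k - 1] - c[k + 1]) / denom if denom != 0 else 0.0
              step = z_positions[1] - z_positions[0]
              return z_positions[k] + dk * step
          return z_positions[k]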

  11. Low computation vision-based navigation for a Martian rover

    NASA Technical Reports Server (NTRS)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  12. Computational analysis of flow in 3D propulsive transition ducts

    NASA Technical Reports Server (NTRS)

    Sepri, Paavo

    1990-01-01

    Fully three-dimensional, statistically steady flows in propulsive transition ducts being considered for use in future aircraft of higher maneuverability are analyzed numerically. The purpose of the transition duct is to convert axisymmetric flow from conventional propulsion systems to that of a rectangular geometry of high aspect ratio. In an optimal design, the transition duct would be of minimal length in order to reduce the weight penalty, while the geometrical change would be gradual enough to avoid detrimental flow perturbations. Recent experiments conducted at the Propulsion Aerodynamics Branch have indicated that thrust losses in ducts of superelliptic cross-section can be surprisingly low, even if flow separation occurs near the divergent walls. In order to address the objective of developing a rational design procedure for optimal transition ducts, it is necessary to have available a reliable computational tool for the analysis of flows achieved in a sequence of configurations. Current CFD efforts involving complicated geometries usually must contend with two separate but interactive aspects: namely, grid generation and flow solution. The first two avenues of the present investigation comprised suitable grid generation for a class of transition ducts of superelliptic cross-section, and the subsequent application of the flow solver PAB3D to this geometry. The code, PAB3D, was developed as a comprehensive tool for the solution of both internal and external high speed flows. The third avenue of investigation has involved analytical formulations to aid in the understanding of the nature of duct flows, and also to provide a basis of comparison for subsequent numerical solutions. Numerical results to date include the generation of two preliminary grid systems for duct flows, and the initial application of PAB3D to the corresponding geometries, which are of the class tested experimentally.

  13. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2004-12-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but currently these technologies pose a number of challenges to many recent users, i.e., "what are they, how good are they and how do they compare?". The need to understand, test and integrate those range cameras with other technologies, e.g. photogrammetry, CAD, etc., is driven by the quest for optimal resolution, accuracy, speed and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. The understanding of the basic theory and best practices associated with those cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or a number of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  14. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2005-01-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but currently these technologies pose a number of challenges to many recent users, i.e., "what are they, how good are they and how do they compare?". The need to understand, test and integrate those range cameras with other technologies, e.g. photogrammetry, CAD, etc., is driven by the quest for optimal resolution, accuracy, speed and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. The understanding of the basic theory and best practices associated with those cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or a number of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  15. Bayesian motion estimation accounts for a surprising bias in 3D vision

    PubMed Central

    Welchman, Andrew E.; Lam, Judith M.; Bülthoff, Heinrich H.

    2008-01-01

    Determining the approach of a moving object is a vital survival skill that depends on the brain combining information about lateral translation and motion-in-depth. Given the importance of sensing motion for obstacle avoidance, it is surprising that humans make errors, reporting an object will miss them when it is on a collision course with their head. Here we provide evidence that biases observed when participants estimate movement in depth result from the brain's use of a “prior” favoring slow velocity. We formulate a Bayesian model for computing 3D motion using independently estimated parameters for the shape of the visual system's slow velocity prior. We demonstrate the success of this model in accounting for human behavior in separate experiments that assess both sensitivity and bias in 3D motion estimation. Our results show that a surprising perceptual error in 3D motion perception reflects the importance of prior probabilities when estimating environmental properties. PMID:18697948

  16. Bayesian motion estimation accounts for a surprising bias in 3D vision.

    PubMed

    Welchman, Andrew E; Lam, Judith M; Bülthoff, Heinrich H

    2008-08-19

    Determining the approach of a moving object is a vital survival skill that depends on the brain combining information about lateral translation and motion-in-depth. Given the importance of sensing motion for obstacle avoidance, it is surprising that humans make errors, reporting an object will miss them when it is on a collision course with their head. Here we provide evidence that biases observed when participants estimate movement in depth result from the brain's use of a "prior" favoring slow velocity. We formulate a Bayesian model for computing 3D motion using independently estimated parameters for the shape of the visual system's slow velocity prior. We demonstrate the success of this model in accounting for human behavior in separate experiments that assess both sensitivity and bias in 3D motion estimation. Our results show that a surprising perceptual error in 3D motion perception reflects the importance of prior probabilities when estimating environmental properties.
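
    The effect of a slow-velocity prior can be illustrated with a one-dimensional Gaussian toy model: the maximum a posteriori velocity estimate shrinks toward zero, and the shrinkage is stronger for noisier cues, which biases the inferred 3D trajectory. This is only a didactic sketch, not the authors' full 3D observer model.

      import numpy as np

      def map_velocity(measured_v, sigma_like, sigma_prior):
          """MAP estimate with a Gaussian likelihood around the measurement and a
          zero-mean Gaussian 'slow velocity' prior: the estimate shrinks toward 0,
          and the shrinkage grows as the measurement gets noisier."""
          w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
          return w * measured_v

      # Example: equal true lateral and in-depth velocities, but the in-depth cue is
      # noisier, so its estimate is shrunk more -- biasing the perceived 3D trajectory.
      lateral = map_velocity(1.0, sigma_like=0.1, sigma_prior=0.5)
      in_depth = map_velocity(1.0, sigma_like=0.4, sigma_prior=0.5)
      print(lateral, in_depth)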

  17. Computational model of mesenchymal migration in 3D under chemotaxis.

    PubMed

    Ribeiro, F O; Gómez-Benito, M J; Folgado, J; Fernandes, P R; García-Aznar, J M

    2017-01-01

    Cell chemotaxis is an important characteristic of cellular migration, which takes part in crucial aspects of life and development. In this work, we propose a novel in silico model of mesenchymal 3D migration with competing protrusions under a chemotactic gradient. Based on recent experimental observations, we identify three main stages that can regulate mesenchymal chemotaxis: chemosensing, dendritic protrusion dynamics and cell-matrix interactions. Therefore, each of these features is considered as a different module of the main regulatory computational algorithm. The numerical model was particularized for the case of fibroblast chemotaxis under a PDGF-bb gradient. Fibroblast migration was simulated embedded in two different 3D matrices - collagen and fibrin - and under several PDGF-bb concentrations. Validation of the model results was provided through qualitative and quantitative comparison with in vitro studies. Our numerical predictions of cell trajectories and speeds were within the measured in vitro ranges in both collagen and fibrin matrices. In fibrin, however, the migration speed of fibroblasts is very low, because fibrin is a stiffer and more entangling matrix. Testing PDGF-bb concentrations, we noticed that an increase in this factor produces a speed increase; at 1 ng mL(-1) a speed peak is reached, after which the migration speed diminishes again. Moreover, we observed that fibrin exerts a dampening behavior on migration, significantly affecting the migration efficiency.

  18. Computational model of mesenchymal migration in 3D under chemotaxis

    PubMed Central

    Ribeiro, F. O.; Gómez-Benito, M. J.; Folgado, J.; Fernandes, P. R.; García-Aznar, J. M.

    2017-01-01

    Abstract Cell chemotaxis is an important characteristic of cellular migration, which takes part in crucial aspects of life and development. In this work, we propose a novel in silico model of mesenchymal 3D migration with competing protrusions under a chemotactic gradient. Based on recent experimental observations, we identify three main stages that can regulate mesenchymal chemotaxis: chemosensing, dendritic protrusion dynamics and cell–matrix interactions. Therefore, each of these features is considered as a different module of the main regulatory computational algorithm. The numerical model was particularized for the case of fibroblast chemotaxis under a PDGF-bb gradient. Fibroblast migration was simulated embedded in two different 3D matrices – collagen and fibrin – and under several PDGF-bb concentrations. Validation of the model results was provided through qualitative and quantitative comparison with in vitro studies. Our numerical predictions of cell trajectories and speeds were within the measured in vitro ranges in both collagen and fibrin matrices. In fibrin, however, the migration speed of fibroblasts is very low, because fibrin is a stiffer and more entangling matrix. Testing PDGF-bb concentrations, we noticed that an increase in this factor produces a speed increase; at 1 ng mL−1 a speed peak is reached, after which the migration speed diminishes again. Moreover, we observed that fibrin exerts a dampening behavior on migration, significantly affecting the migration efficiency. PMID:27336322

  19. TOPICAL REVIEW: Computational approaches to 3D modeling of RNA

    NASA Astrophysics Data System (ADS)

    Laing, Christian; Schlick, Tamar

    2010-07-01

    Many exciting discoveries have recently revealed the versatility of RNA and its importance in a variety of functions within the cell. Since the structural features of RNA are of major importance to their biological function, there is much interest in predicting RNA structure, either in free form or in interaction with various ligands, including proteins, metabolites and other molecules. In recent years, an increasing number of researchers have developed novel RNA algorithms for predicting RNA secondary and tertiary structures. In this review, we describe current experimental and computational advances and discuss recent ideas that are transforming the traditional view of RNA folding. To evaluate the performance of the most recent RNA 3D folding algorithms, we provide a comparative study in order to test the performance of available 3D structure prediction algorithms for an RNA data set of 43 structures of various lengths and motifs. We find that the algorithms vary widely in terms of prediction quality across different RNA lengths and topologies; most predictions have very large root mean square deviations from the experimental structure. We conclude by outlining some suggestions for future RNA folding research.
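
    The root mean square deviation used above to score predicted against experimental structures is conventionally computed after an optimal rigid superposition (Kabsch algorithm). A minimal NumPy sketch is shown below; it assumes the two structures have already been reduced to matched Nx3 coordinate arrays (for example, corresponding backbone atoms).

      import numpy as np

      def kabsch_rmsd(P, Q):
          """RMSD between two Nx3 coordinate sets after optimal rigid superposition."""
          P = P - P.mean(axis=0)
          Q = Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(P.T @ Q)
          d = np.sign(np.linalg.det(U @ Vt))       # avoid an improper rotation (reflection)
          D = np.diag([1.0, 1.0, d])
          diff = P @ (U @ D @ Vt) - Q              # rotate P onto Q, then compare
          return np.sqrt((diff**2).sum() / len(P))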

  20. Development and Evaluation of 2-D and 3-D Exocentric Synthetic Vision Navigation Display Concepts for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  1. Glasses for 3D ultrasound computer tomography: phase compensation

    NASA Astrophysics Data System (ADS)

    Zapf, M.; Hopp, T.; Ruiter, N. V.

    2016-03-01

    Ultrasound Computer Tomography (USCT), developed at KIT, is a promising new imaging system for breast cancer diagnosis, and was successfully tested in a pilot study. The 3D USCT II prototype consists of several hundred ultrasound (US) transducers on a semi-ellipsoidal aperture. Spherical waves are sequentially emitted by individual transducers and received in parallel by many transducers. Reflectivity volumes are reconstructed by synthetic aperture focusing (SAFT). However, straightforward SAFT imaging leads to blurred images due to system imperfections. We present an extension of a previously proposed approach to enhance the images. This approach includes additional a priori information and system characteristics; spatial phase compensation has now been included. The approach was evaluated with simulated and clinical data sets. An increase in image quality was observed and quantitatively measured by SNR and other metrics.
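
    The SAFT reconstruction mentioned above is, in its plain form, a delay-and-sum: each voxel accumulates every A-scan at the sample whose time of flight matches the emitter-voxel-receiver path. The sketch below shows that plain form with an assumed constant sound speed and sampling rate; the KIT system's optimized, phase-compensated implementation is not reproduced.

      import numpy as np

      def saft(a_scans, emitters, receivers, voxels, c=1500.0, fs=10e6):
          """Delay-and-sum SAFT.

          a_scans:   (n_pairs, n_samples) recorded signals
          emitters:  (n_pairs, 3) emitter position for each A-scan
          receivers: (n_pairs, 3) receiver position for each A-scan
          voxels:    (n_vox, 3) reconstruction points
          """
          image = np.zeros(len(voxels))
          n_samples = a_scans.shape[1]
          for scan, e, r in zip(a_scans, emitters, receivers):
              path = np.linalg.norm(voxels - e, axis=1) + np.linalg.norm(voxels - r, axis=1)
              idx = np.round(path / c * fs).astype(int)     # sample index for each voxel
              valid = idx < n_samples
              image[valid] += scan[idx[valid]]
          return image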

  2. 3D finite-difference seismic migration with parallel computers

    SciTech Connect

    Ober, C.C.; Gjertsen, R.; Minkoff, S.; Womble, D.E.

    1998-11-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is essential for reducing the risk associated with oil exploration. Imaging these structures, however, is computationally expensive as datasets can be terabytes in size. Traditional ray-tracing migration methods cannot handle complex velocity variations commonly found near such salt structures. Instead the authors use the full 3D acoustic wave equation, discretized via a finite difference algorithm. They reduce the cost of solving the paraxial wave equation by a number of numerical techniques including the method of fractional steps and pipelining the tridiagonal solves. The imaging code, Salvo, uses both frequency parallelism (generally 90% efficient) and spatial parallelism (65% efficient). Salvo has been tested on synthetic and real data and produces clear images of the subsurface even beneath complicated salt structures.
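
    The fractional-step discretization of the paraxial wave equation mentioned above leads to many independent tridiagonal systems. For reference, a plain serial Thomas-algorithm solver for one such system is sketched below; Salvo's pipelined parallel variant is not reproduced.

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system A x = d, where a is the sub-diagonal (a[0] unused),
          b the main diagonal and c the super-diagonal (c[-1] unused)."""
          a, b, c, d = map(np.asarray, (a, b, c, d))
          n = len(d)
          dtype = np.result_type(a, b, c, d)          # supports complex frequency-domain systems
          cp = np.zeros(n, dtype=dtype)
          dp = np.zeros(n, dtype=dtype)
          cp[0] = c[0] / b[0]
          dp[0] = d[0] / b[0]
          for i in range(1, n):                        # forward elimination
              m = b[i] - a[i] * cp[i - 1]
              if i < n - 1:
                  cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.zeros(n, dtype=dtype)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):               # back substitution
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x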

  3. Parallelization of ARC3D with Computer-Aided Tools

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; Hribar, Michelle; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    A series of efforts have been devoted to investigating methods of porting and parallelizing applications quickly and efficiently for new architectures, such as the SGI Origin 2000 and Cray T3E. This report presents the parallelization of a CFD application, ARC3D, using the computer-aided parallelization tool CAPTools. The steps taken to parallelize this code and the requirements for achieving better performance are discussed. The generated parallel version has achieved reasonably good performance, for example a speedup of 30 on 36 Cray T3E processors. However, this performance could not be obtained without modification of the original serial code. It is suggested that in many cases improving the serial code and performing necessary code transformations are important parts of the automated parallelization process, although user intervention in many of these parts is still necessary. Nevertheless, development and improvement of useful software tools, such as CAPTools, can help trim down many tedious parallelization details and improve the processing efficiency.

  4. 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach

    PubMed Central

    Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutive generated point clouds in order to assure a perfect alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and estimated point cloud approximates one centimeter for an area covering approximately 4000 m2. To prove the genericity of the system, it was tested on the well-known Kitti vision benchmark. The results show that our approach competes with state of the art methods without making any additional assumptions. PMID:27854315

  5. 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach.

    PubMed

    Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried

    2016-11-16

    In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutive generated point clouds in order to assure a perfect alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and estimated point cloud approximates one centimeter for an area covering approximately 4000 m 2 . To prove the genericity of the system, it was tested on the well-known Kitti vision benchmark. The results show that our approach competes with state of the art methods without making any additional assumptions.
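
    At its core, aligning each newly generated point cloud with the global 3D map is a rigid registration problem. A minimal point-to-point ICP sketch with NumPy/SciPy is given below as an illustration; the paper's pipeline additionally uses surface reconstruction and pose-graph loop closure, which are not reproduced.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          """Least-squares rotation R and translation t mapping src onto dst (SVD/Kabsch)."""
          cs, cd = src.mean(axis=0), dst.mean(axis=0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                 # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, cd - R @ cs

      def icp(scan, global_map, iters=30):
          """Align a new scan to the global map by iterating nearest-neighbour matching
          and closed-form rigid estimation. Returns the accumulated (R, t) and the aligned scan."""
          tree = cKDTree(global_map)
          R_total, t_total = np.eye(3), np.zeros(3)
          cur = scan.copy()
          for _ in range(iters):
              _, idx = tree.query(cur)
              R, t = best_rigid_transform(cur, global_map[idx])
              cur = cur @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total, cur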

  6. Parallel Algorithms for Computer Vision

    DTIC Science & Technology

    1990-04-01

    Record excerpt (reference fragments): Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, W. H. Winston, Washington, D.C., 1977; V. Torre, "Ill-posed problems in early vision," Proceedings of the IEEE, 76:869-889, 1988; J. Besag, "Spatial interaction and the statistical analysis of lattice..."; "Cooperative computation of stereo disparity," Science, 194:283-287, 1976; J. Marroquin, S. Mitter, and T. Poggio, "Probabilistic solution of ill-posed..."

  7. Understanding and preventing computer vision syndrome.

    PubMed

    Loh, Ky; Redd, Sc

    2008-01-01

    The invention of the computer and advances in information technology have revolutionized and benefited society, but at the same time their use has caused symptoms such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms lead to computer vision syndrome: the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. The visual characteristics of the computer display, such as brightness, resolution, glare and quality, are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  8. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    PubMed

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2017-03-22

    We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with whichever manual control method is available to him or her. When it detects objects within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. The system then waits until the user states a voice command. Once the command is given, the control interface drives the robot autonomously until the command is completed. In the empirical evaluations conducted with human subjects from two different groups, it was shown that semi-autonomous control can be used as an alternative control method that enables individuals with impaired motor control to operate the robot arms more efficiently by facilitating their fine motion control. The advantage of semi-autonomous control was not obvious for simple tasks, but for relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively. It provides the user with task-specific, intelligent semi-autonomous manipulation assistance, gives the user the feeling that he or she is still in control at any moment, and is compatible with different types of new and existing manual control methods for ARMs.

  9. Computational and methodological developments towards 3D full waveform inversion

    NASA Astrophysics Data System (ADS)

    Etienne, V.; Virieux, J.; Hu, G.; Jia, Y.; Operto, S.

    2010-12-01

    Full waveform inversion (FWI) is one of the most promising techniques for seismic imaging. It relies on a formalism taking into account every piece of information contained in the seismic data, as opposed to more classical techniques such as travel time tomography. As a result, FWI is a high resolution imaging process able to reach a spatial accuracy equal to half a wavelength. FWI is based on a local optimization scheme, and therefore the main limitation concerns the starting model, which has to be close enough to the real one in order to converge to the global minimum. Another drawback of FWI is the computational resources required when considering models and frequencies of interest. The task becomes even more demanding when one attempts to perform the inversion using the elastic equation instead of the acoustic approximation. This is the reason why, until recently, most studies were limited to 2D cases. In the last few years, due to the increase of the available computational power, FWI has attracted a lot of interest and continuous effort towards inversion of 3D models, leading to remarkable applications up to the continental scale. We investigate the computational burden induced by FWI in 3D elastic media and propose some strategic features leading to a reduction of the numerical cost while providing great flexibility in the inversion parametrization. First, in order to relax the memory requirements, we developed our FWI algorithm in the frequency domain and take advantage of the wave-number redundancy in the seismic data to process a reduced number of frequencies. To do so, we extract frequency solutions from time marching techniques, which are efficient for 3D structures. Moreover, this frequency approach permits a multi-resolution strategy by proceeding from low to high frequencies: the final model at one frequency is used as the starting model for the next frequency. This procedure partially overcomes the non-linear behavior of the inversion

  10. Protein 3D Structure Computed from Evolutionary Sequence Variation

    PubMed Central

    Sheridan, Robert; Hopf, Thomas A.; Pagnani, Andrea; Zecchina, Riccardo; Sander, Chris

    2011-01-01

    The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7–4.8 Å Cα-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of protein

  11. Protein 3D structure computed from evolutionary sequence variation.

    PubMed

    Marks, Debora S; Colwell, Lucy J; Sheridan, Robert; Hopf, Thomas A; Pagnani, Andrea; Zecchina, Riccardo; Sander, Chris

    2011-01-01

    The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7-4.8 Å C(α)-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of protein structures
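
    The couplings in the paper are inferred with a global maximum-entropy model fitted to the multiple sequence alignment. As a much simpler illustration of the co-variation signal such methods exploit, the sketch below computes only a local statistic, the mutual information between two alignment columns; this is not EVfold, and local statistics are known to be noisier than the global model.

      import numpy as np
      from collections import Counter

      def column_mi(msa, i, j):
          """Mutual information between columns i and j of a multiple sequence alignment
          (a list of equal-length strings). A crude stand-in for the global couplings
          inferred by maximum-entropy models such as EVfold/DCA."""
          pairs = Counter((s[i], s[j]) for s in msa)
          n = sum(pairs.values())
          pi = Counter(s[i] for s in msa)
          pj = Counter(s[j] for s in msa)
          mi = 0.0
          for (a, b), count in pairs.items():
              pab = count / n
              mi += pab * np.log(pab / ((pi[a] / n) * (pj[b] / n)))
          return mi

      msa = ["ACDE", "ACDE", "GCDK", "GCDK"]        # toy alignment: columns 0 and 3 co-vary
      scores = [[column_mi(msa, i, j) for j in range(4)] for i in range(4)]
      print(np.round(scores, 2))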

  12. Applications of 3D orbital computer-assisted surgery (CAS).

    PubMed

    Scolozzi, P

    2017-09-01

    The purpose of the present report is to describe the indications for use of 3D orbital computer-assisted surgery (CAS). We analyzed the clinical and radiological data of all patients with orbital deformities treated using intra-operative navigation and CAD/CAM techniques at the Hôpitaux Universitaires de Genève, Switzerland, between 2009 and 2016. We recorded age and gender, orbital deformity, technical and surgical procedure and postoperative complications. One hundred and three patients were included. Mean age was 39.5 years (range, 5 to 84 years) and 85 (87.5%) were men. Of the 103 patients, 96 had intra-operative navigation (34 for primary and 3 for secondary orbito-zygomatic fractures, 15 for Le Fort fractures, 16 for orbital floor fractures, 10 for combined orbital floor and medial wall fractures, 7 for orbital medial wall fractures, 3 for NOE (naso-orbito-ethmoidal) fractures, 2 for isolated comminuted zygomatic arch fractures, 1 for enophthalmos, 3 for TMJ ankylosis and 2 for fibrous dysplasia bone recontouring), 8 patients had CAD/CAM PEEK-PSI for correction of residual orbital bone contour following craniomaxillofacial trauma, and 1 patient had CAD/CAM surgical splints and cutting guides for correction of orbital hypertelorism. Two patients (1.9%) required revision surgery for readjustment of an orbital mesh. The 1-year follow-up examination showed stable cosmetic and dimensional results in all patients. This study demonstrated that the application of 3D orbital CAS with regard to intra-operative navigation and CAD/CAM techniques allowed for a successful outcome in the patients presented in this series. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  13. 3D Computer aided treatment planning in endodontics.

    PubMed

    van der Meer, Wicher J; Vissink, Arjan; Ng, Yuan Ling; Gulabivala, Kishor

    2016-02-01

    Obliteration of the root canal system due to accelerated dentinogenesis and dystrophic calcification can challenge the achievement of root canal treatment goals. This paper describes the application of 3D digital mapping technology for predictable navigation of obliterated canal systems during root canal treatment to avoid iatrogenic damage of the root. Digital endodontic treatment planning for anterior teeth with severely obliterated root canal systems was accomplished with the aid of computer software, based on cone beam computer tomography (CBCT) scans and intra-oral scans of the dentition. On the basis of these scans, endodontic guides were created for the planned treatment through digital designing and rapid prototyping fabrication. The custom-made guides allowed for an uncomplicated and predictable canal location and management. The method of digital designing and rapid prototyping of endodontic guides allows for reliable and predictable location of root canals of teeth with calcifically metamorphosed root canal systems. The endodontic directional guide facilitates difficult endodontic treatments at little additional cost. Copyright © 2016. Published by Elsevier Ltd.

  14. Identification and Detection of Simple 3D Objects with Severely Blurred Vision

    PubMed Central

    Kallie, Christopher S.; Legge, Gordon E.; Yu, Deyue

    2012-01-01

    Purpose. Detecting and recognizing three-dimensional (3D) objects is an important component of the visual accessibility of public spaces for people with impaired vision. The present study investigated the impact of environmental factors and object properties on the recognition of objects by subjects who viewed physical objects with severely reduced acuity. Methods. The experiment was conducted in an indoor testing space. We examined detection and identification of simple convex objects by normally sighted subjects wearing diffusing goggles that reduced effective acuity to 20/900. We used psychophysical methods to examine the effect on performance of important environmental variables: viewing distance (from 10–24 feet, or 3.05–7.32 m) and illumination (overhead fluorescent and artificial window), and object variables: shape (boxes and cylinders), size (heights from 2–6 feet, or 0.61–1.83 m), and color (gray and white). Results. Object identification was significantly affected by distance, color, height, and shape, as well as interactions between illumination, color, and shape. A stepwise regression analysis showed that 64% of the variability in identification could be explained by object contrast values (58%) and object visual angle (6%). Conclusions. When acuity is severely limited, illumination, distance, color, height, and shape influence the identification and detection of simple 3D objects. These effects can be explained in large part by the impact of these variables on object contrast and visual angle. Basic design principles for improving object visibility are discussed. PMID:23111613

  15. A flexible 3D vision system based on structured light for in-line product inspection

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Nygaard, Jens Olav; Thielemann, Jens; Vollset, Thor

    2008-02-01

    A flexible and highly configurable 3D vision system targeted for in-line product inspection is presented. The system includes a low cost 3D camera based on structured light and a set of flexible software tools that automate the measurement process. The specification of the measurement tasks is done in a first manual step. The user selects regions of the point cloud to analyze and specifies primitives to be characterized within these regions. After all measurement tasks have been specified, measurements can be carried out on successive parts automatically and without supervision. As a test case, a measurement cell for inspection of a V-shaped car component has been developed. The car component consists of two steel tubes attached to a central hub. Each of the tubes has an additional bushing clamped to its end. A measurement is performed in a few seconds and results in an ordered point cloud with 1.2 million points. The software is configured to fit cylinders to each of the steel tubes as well as to the inside of the bushings of the car part. The size, position and orientation of the fitted cylinders allow us to measure and verify a series of dimensions specified on the CAD drawing of the component with sub-millimetre accuracy.
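
    Fitting a cylinder to a selected region of the ordered point cloud, as described above, can be posed as a small non-linear least-squares problem over an axis point, an axis direction and a radius. The SciPy sketch below illustrates one such parameterization; it is a generic formulation, not the described system's fitting routine, and a robust version would add outlier rejection.

      import numpy as np
      from scipy.optimize import least_squares

      def cylinder_residuals(params, pts):
          """params = (px, py, pz, theta, phi, r): a point on the axis, the axis direction
          in spherical angles, and the radius. Residual = distance of each point from
          the axis, minus r."""
          p = params[:3]
          theta, phi, r = params[3], params[4], params[5]
          axis = np.array([np.sin(theta) * np.cos(phi),
                           np.sin(theta) * np.sin(phi),
                           np.cos(theta)])
          d = pts - p
          radial = d - np.outer(d @ axis, axis)     # component perpendicular to the axis
          return np.linalg.norm(radial, axis=1) - r

      def fit_cylinder(pts):
          """Fit a cylinder to an Nx3 region of the point cloud; returns the least_squares result."""
          centroid = pts.mean(axis=0)
          x0 = np.concatenate([centroid, [0.0, 0.0, np.std(pts)]])   # rough initial guess
          return least_squares(cylinder_residuals, x0, args=(pts,))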

  16. Perception of 3-D location based on vision, touch, and extended touch.

    PubMed

    Giudice, Nicholas A; Klatzky, Roberta L; Bennett, Christopher R; Loomis, Jack M

    2013-01-01

    Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.

  17. Vision-based augmented reality computer assisted surgery navigation system

    NASA Astrophysics Data System (ADS)

    Sun, Lei; Chen, Xin; Xu, Kebin; Li, Xin; Xu, Wei

    2007-12-01

    A vision-based Augmented Reality computer assisted surgery navigation system is presented in this paper. It applies the Augmented Reality technique to a surgery navigation system, so that the surgeon's vision of the real world is enhanced. In the system, camera calibration is used to compute the camera's projection matrix, and the virtual-to-real registration is then performed using this transformation. The merging of synthetic 3D information into the user's view is realized with texturing techniques. The experimental results demonstrate the feasibility of the system we have designed.
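
    Once the projection matrix (intrinsics plus pose) is known from calibration and registration, overlaying synthetic 3D information amounts to projecting the virtual model's points into the live image. The OpenCV sketch below illustrates that step under assumed inputs; it is not the authors' implementation.

      import cv2
      import numpy as np

      def overlay_points(frame, object_pts, rvec, tvec, K, dist_coeffs):
          """Project virtual 3D points into the camera image and draw them, given the
          camera intrinsics K, distortion coefficients, and the pose (rvec, tvec)
          obtained from calibration/registration."""
          img_pts, _ = cv2.projectPoints(object_pts.astype(np.float32), rvec, tvec, K, dist_coeffs)
          for u, v in img_pts.reshape(-1, 2):
              cv2.circle(frame, (int(round(u)), int(round(v))), 3, (0, 255, 0), -1)
          return frame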

  18. 3D Vectorial Time Domain Computational Integrated Photonics

    SciTech Connect

    Kallman, J S; Bond, T C; Koning, J M; Stowell, M L

    2007-02-16

    The design of integrated photonic structures poses considerable challenges. 3D-Time-Domain design tools are fundamental in enabling technologies such as all-optical logic, photonic bandgap sensors, THz imaging, and fast radiation diagnostics. Such technologies are essential to LLNL and WFO sponsors for a broad range of applications: encryption for communications and surveillance sensors (NSA, NAI and IDIV/PAT); high density optical interconnects for high-performance computing (ASCI); high-bandwidth instrumentation for NIF diagnostics; micro-sensor development for weapon miniaturization within the Stockpile Stewardship and DNT programs; and applications within HSO for CBNP detection devices. While there exist a number of photonics simulation tools on the market, they primarily model devices of interest to the communications industry. We saw the need to extend our previous software to match the Laboratory's unique emerging needs. These include modeling novel material effects (such as those of radiation induced carrier concentrations on refractive index) and device configurations (RadTracker bulk optics with radiation induced details, Optical Logic edge emitting lasers with lateral optical inputs). In addition we foresaw significant advantages to expanding our own internal simulation codes: parallel supercomputing could be incorporated from the start, and the simulation source code would be accessible for modification and extension. This work addressed Engineering's Simulation Technology Focus Area, specifically photonics. Problems addressed from the Engineering roadmap of the time included modeling the Auston switch (an important THz source/receiver), modeling Vertical Cavity Surface Emitting Lasers (VCSELs, which had been envisioned as part of fast radiation sensors), and multi-scale modeling of optical systems (for a variety of applications). We proposed to develop novel techniques to numerically solve the 3D multi-scale propagation problem for both the microchip

  19. Computer acquisition of 3D images utilizing dynamic speckles

    NASA Astrophysics Data System (ADS)

    Kamshilin, Alexei A.; Semenov, Dmitry V.; Nippolainen, Ervin; Raita, Erik

    2006-05-01

    We present a novel technique for fast, non-contact and continuous profile measurements of rough surfaces by use of dynamic speckles. The dynamic speckle pattern is generated when the laser beam scans the surface under study. The most impressive feature of the proposed technique is its ability to work at extremely high scanning speeds of hundreds of meters per second. The technique is based on continuous frequency measurement of the light-power modulation after spatial filtering of the scattered light. A complete optical-electronic system was designed and fabricated for fast measurement of the speckle velocity, its conversion into distance, and further data acquisition into a computer. The measured surface profile is displayed on a PC monitor in real time. The response time of the measuring system is below 1 μs. Important parameters of the system such as accuracy, range of measurements, and spatial resolution are analyzed. Limits of the spatial filtering technique used for continuous tracking of the speckle-pattern velocity are shown. Possible ways of further improving the measurement accuracy are demonstrated. Owing to its extremely fast operation, the proposed technique could be applied for online control of the 3D shape of complex objects (e.g., electronic circuits) during their assembly.
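
    The central measurement in the technique above is the modulation frequency of the photodetector signal behind the spatial filter, which is proportional to the speckle translation speed (v = f·Λ for a filter of spatial period Λ). The sketch below estimates that frequency with an FFT and converts it to a speed; the further conversion to surface distance depends on the optical geometry of the particular setup and is not shown.

      import numpy as np

      def dominant_frequency(signal, fs):
          """Frequency (Hz) of the strongest non-DC component of the photodetector signal."""
          spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
          return freqs[np.argmax(spectrum)]

      def speckle_velocity(signal, fs, filter_period_m):
          """Speckle translation speed v = f * Lambda, where Lambda is the spatial period
          of the filter (grating) in metres."""
          return dominant_frequency(signal, fs) * filter_period_m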

  20. Computation of 3D queries for ROCS based virtual screens.

    PubMed

    Tawa, Gregory J; Baber, J Christian; Humblet, Christine

    2009-12-01

    Rapid overlay of chemical structures (ROCS) is a method that aligns molecules based on shape and/or chemical similarity. It is often used in 3D ligand-based virtual screening. Given a query consisting of a single conformation of an active molecule ROCS can generate highly enriched hit lists. Typically the chosen query conformation is a minimum energy structure. Can better enrichment be obtained using conformations other than the minimum energy structure? To answer this question a methodology has been developed called CORAL (COnformational analysis, Rocs ALignment). For a given set of molecule conformations it computes optimized conformations for ROCS screening. It does so by clustering all conformations of a chosen molecule set using pairwise ROCS combo scores. The best representative conformation is that which has the highest average overlap with the rest of the conformations in the cluster. It is these best representative conformations that are then used for virtual screening. CORAL was tested by performing virtual screening experiments with the 40 DUD (Directory of Useful Decoys) data sets. Both CORAL and minimum energy queries were used. The recognition capability of each query was quantified as the area under the ROC curve (AUC). Results show that the CORAL AUC values are on average larger than the minimum energy AUC values. This demonstrates that one can indeed obtain better ROCS enrichments with conformations other than the minimum energy structure. As a result, CORAL analysis can be a valuable first step in virtual screening workflows using ROCS.

  1. Computation of 3D queries for ROCS based virtual screens

    NASA Astrophysics Data System (ADS)

    Tawa, Gregory J.; Baber, J. Christian; Humblet, Christine

    2009-12-01

    Rapid overlay of chemical structures (ROCS) is a method that aligns molecules based on shape and/or chemical similarity. It is often used in 3D ligand-based virtual screening. Given a query consisting of a single conformation of an active molecule ROCS can generate highly enriched hit lists. Typically the chosen query conformation is a minimum energy structure. Can better enrichment be obtained using conformations other than the minimum energy structure? To answer this question a methodology has been developed called CORAL (COnformational analysis, Rocs ALignment). For a given set of molecule conformations it computes optimized conformations for ROCS screening. It does so by clustering all conformations of a chosen molecule set using pairwise ROCS combo scores. The best representative conformation is that which has the highest average overlap with the rest of the conformations in the cluster. It is these best representative conformations that are then used for virtual screening. CORAL was tested by performing virtual screening experiments with the 40 DUD (Directory of Useful Decoys) data sets. Both CORAL and minimum energy queries were used. The recognition capability of each query was quantified as the area under the ROC curve (AUC). Results show that the CORAL AUC values are on average larger than the minimum energy AUC values. This demonstrates that one can indeed obtain better ROCS enrichments with conformations other than the minimum energy structure. As a result, CORAL analysis can be a valuable first step in virtual screening workflows using ROCS.
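
    The selection step described above (choosing, within each cluster, the conformation with the highest average overlap with its cluster mates) can be expressed compactly once the pairwise ROCS combo-score matrix and the cluster labels are available. The NumPy sketch below shows only that selection step; computing the combo scores themselves requires the ROCS software and is not shown.

      import numpy as np

      def cluster_representatives(combo_scores, labels):
          """combo_scores: (n, n) symmetric matrix of pairwise ROCS combo scores between
          conformations; labels: length-n cluster assignment. Returns, for each cluster,
          the index of the conformation with the highest mean score to its cluster mates."""
          reps = {}
          for c in np.unique(labels):
              members = np.flatnonzero(labels == c)
              sub = combo_scores[np.ix_(members, members)]
              # exclude self-similarity from the average
              mean_overlap = (sub.sum(axis=1) - np.diag(sub)) / max(len(members) - 1, 1)
              reps[c] = members[np.argmax(mean_overlap)]
          return reps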

  2. 3D reconstruction from a monocular vision system for unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Tompkins, R. Cortland; Diskin, Yakov; Youssef, Menatoallah M.; Asari, Vijayan K.

    2011-11-01

    In this paper we present a 3D reconstruction technique designed to support an autonomously navigated unmanned system. The algorithm and methods presented focus on the 3D reconstruction of a scene, with color and distance information, using only a single moving camera. In this way, the system may provide positional self-awareness for navigation within a known, GPS-denied area. It can also be used to construct a new model of unknown areas. Existing 3D reconstruction methods for GPS-denied areas often rely on expensive inertial measurement units to establish camera location and orientation. The algorithm proposed---after the preprocessing tasks of stabilization and video enhancement---performs Speeded-Up Robust Feature extraction, in which we locate unique stable points within every frame. Additional features are extracted using an optical flow method, with the resultant points fused and pruned based on several quality metrics. Each unique point is then tracked through the video sequence and assigned a disparity value used to compute the depth for each feature within the scene. The algorithm also assigns each feature point a horizontal and vertical coordinate using the camera's field-of-view specifications. From this, the resultant point cloud, generated from pairs of sequential frames, consists of thousands of feature points plotted from a particular camera position and direction. The proposed method can use the yaw, pitch and roll information calculated from visual cues within the image data to accurately compute location and orientation. This positioning information enables the reconstruction of a robust 3D model particularly suitable for autonomous navigation and mapping tasks.
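
    The depth-from-disparity and field-of-view back-projection described above can be sketched as follows under a simple pinhole-camera assumption. The relation Z = f·B/d, the baseline value, and all names are illustrative; the paper's exact formulation (and its visual estimation of yaw, pitch and roll) is not reproduced.

    ```python
    import math

    def feature_to_3d(u, v, disparity_px, image_w, image_h, hfov_deg, vfov_deg, baseline_m):
        """Back-project an image feature to a 3D point in the camera frame.

        Assumes a pinhole model: the focal length in pixels is derived from the
        field of view, and depth follows the stereo-style relation Z = f * B / d,
        with B the inter-frame translation (baseline).
        """
        fx = (image_w / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
        fy = (image_h / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
        z = fx * baseline_m / disparity_px
        x = (u - image_w / 2.0) * z / fx
        y = (v - image_h / 2.0) * z / fy
        return x, y, z

    # Hypothetical feature at pixel (900, 400) with a 12-pixel disparity.
    print(feature_to_3d(900, 400, 12.0, 1280, 720, 60.0, 40.0, 0.5))
    ```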

  3. Multilevel Relaxation in Low Level Computer Vision.

    DTIC Science & Technology

    1982-01-01

    Abstract not indexed; the retrieved text consists of citation fragments, including: Hanson, A. and Riseman, E. M., "Processing Cones: A Computational Structure for Image Analysis," in Structured Computer Vision: Machine Perception through Hierarchical Computation Structures, Klinger, A. (Eds.), Academic Press, New York, 1980.

  4. Computer vision syndrome: A review.

    PubMed

    Gowrisankaran, Sowjanya; Sheedy, James E

    2015-01-01

    Computer vision syndrome (CVS) is a collection of symptoms related to prolonged work at a computer display. This article reviews the current knowledge about the symptoms, related factors and treatment modalities for CVS. Relevant literature on CVS published during the past 65 years was analyzed. Symptoms reported by computer users are classified into internal ocular symptoms (strain and ache), external ocular symptoms (dryness, irritation, burning), visual symptoms (blur, double vision) and musculoskeletal symptoms (neck and shoulder pain). The major factors associated with CVS are either environmental (improper lighting, display position and viewing distance) and/or dependent on the user's visual abilities (uncorrected refractive error, oculomotor disorders and tear film abnormalities). Although the factors associated with CVS have been identified the physiological mechanisms that underlie CVS are not completely understood. Additionally, advances in technology have led to the increased use of hand-held devices, which might impose somewhat different visual challenges compared to desktop displays. Further research is required to better understand the physiological mechanisms underlying CVS and symptoms associated with the use of hand-held and stereoscopic displays.

  5. A review of automated image understanding within 3D baggage computed tomography security screening.

    PubMed

    Mouton, Andre; Breckon, Toby P

    2015-01-01

    Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is, however, complicated by poor image resolutions, image clutter and high levels of noise and artefacts. We discuss the recent and most pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.

  6. 3D localization of a labeled target by means of a stereo vision configuration with subvoxel resolution.

    PubMed

    Arias H, Néstor A; Sandoz, Patrick; Meneses, Jaime E; Suarez, Miguel A; Gharbi, Tijani

    2010-11-08

    We present a method for the visual measurement of the 3D position and orientation of a moving target. Three-dimensional sensing is based on stereo vision, while high resolution results from a pseudo-periodic pattern (PPP) fixed onto the target. The PPP is suited for optimizing image processing that is based on phase computations. We describe the experimental setup, image processing and system calibration. Resolutions reported are in the micrometer range for target position (x,y,z) and of 5.3x10^-4 rad for target orientation (θx,θy,θz). These performances have to be appreciated with respect to the vision system used. The latter is such that every image pixel corresponds to an actual area of 0.3x0.3 mm^2 on the target, while the PPP is made of elementary dots of 1 mm with a period of 2 mm. Target tilts as large as π/4 are allowed with respect to the Z axis of the system.
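
    The sub-pixel performance comes from phase computations on the imaged pseudo-periodic pattern. The minimal 1D sketch below shows the underlying idea: the phase of the pattern's fundamental Fourier component shifts by 2π·dx/period for a displacement dx, so displacements far smaller than one pixel can be resolved. The actual 2D processing, stereo geometry and calibration in the paper are more involved; all names here are illustrative.

    ```python
    import numpy as np

    def pattern_phase(profile, period_px):
        """Phase of a (pseudo-)periodic intensity profile at its known period.

        A sub-pixel shift dx of the pattern changes this phase by 2*pi*dx/period,
        which is how displacements far smaller than one pixel are resolved.
        """
        x = np.arange(len(profile))
        carrier = np.exp(-2j * np.pi * x / period_px)
        return np.angle(np.sum((profile - profile.mean()) * carrier))

    period = 16.0
    x = np.arange(256)
    ref = np.cos(2 * np.pi * x / period)
    shifted = np.cos(2 * np.pi * (x - 0.23) / period)     # 0.23-pixel shift
    shift_px = (pattern_phase(ref, period) - pattern_phase(shifted, period)) * period / (2 * np.pi)
    print(shift_px)                                       # ~0.23
    ```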

  7. Benchmarking neuromorphic vision: lessons learnt from computer vision

    PubMed Central

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120

  8. Error analysis and system implementation for structured light stereo vision 3D geometric detection in large scale condition

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Xuping; Wang, Jiaqi; Zhang, Yixin; Wang, Shun; Zhu, Fan

    2012-11-01

    Stereo vision based 3D metrology is an effective approach for 3D geometric detection of relatively large-scale objects. In this paper, we present a dedicated image capture system, which uses a CMOS sensor with an embedded LVDS interface and a CAN bus to ensure synchronous triggering and exposure. We performed an error analysis for structured light vision measurement under large-scale conditions, based on which we built and tested the system prototype both indoors and in the field. The results show that the system is well suited for large-scale metrology applications.

  9. Computer Simulation Of Dolphin Vision

    NASA Astrophysics Data System (ADS)

    Rivamonte, Andre; Dral, A. D.

    1988-08-01

    Improvements in the video display of personal computers have reached a level of spatial and intensity resolution that allows realistic simulation of animal image processing. An IBM PC with standard VGA graphics is capable of providing the computing power to support a visual acuity study, from 1) formulation of the optical/neurological model, 2) acquisition/analysis of data, to 3) simulation of the perceived photic environment. The hardware, software and behavioral data required to "see" a scene degraded/enhanced by the illumination, distance, intervening viewing medium, optical train, retinal mosaic and neural processing are discussed. A model for the optics of the dolphin eye is reviewed and a model of the dolphin retina is presented. This comprehensive description of dolphin vision is integrated into our knowledge of other mammalian visual systems.

  10. Quaternions in computer vision and robotics

    SciTech Connect

    Pervin, E.; Webb, J.A.

    1982-01-01

    Computer vision and robotics suffer from not having good tools for manipulating three-dimensional objects. Vectors, coordinate geometry, and trigonometry all have deficiencies. Quaternions can be used to solve many of these problems. Many properties of quaternions that are relevant to computer vision and robotics are developed. Examples are given showing how quaternions can be used to simplify derivations in computer vision and robotics.

  11. Fusion of Airborne and Terrestrial Image-Based 3d Modelling for Road Infrastructure Management - Vision and First Experiments

    NASA Astrophysics Data System (ADS)

    Nebiker, S.; Cavegn, S.; Eugster, H.; Laemmer, K.; Markram, J.; Wagner, R.

    2012-07-01

    In this paper we present the vision and proof of concept of a seamless image-based 3d modelling approach fusing airborne and mobile terrestrial imagery. The proposed fusion relies on dense stereo matching for extracting 3d point clouds which - in combination with the original airborne and terrestrial stereo imagery - create a rich 3d geoinformation and 3d measuring space. For the seamless exploitation of this space we propose using a new virtual globe technology integrating the airborne and terrestrial stereoscopic imagery with the derived 3d point clouds. The concept is applied to road and road infrastructure management and evaluated in a highway mapping project combining stereovision based mobile mapping with high-resolution multispectral airborne road corridor mapping using the new Leica RCD30 sensor.

  12. Modeling Computer Communication Networks in a Realistic 3D Environment

    DTIC Science & Technology

    2010-03-01

    Abstract not indexed; the retrieved text consists of list-of-figures and abbreviation fragments referencing visualization in OPNET (Optimized Network Evaluation Tool), sample NetViz (Network Visualization) output, realistic 3D terrains, OPNET 3DNV connectivity displays, and the digitally connected battlefield.

  13. Machine learning-based 3-D geometry reconstruction and modeling of aortic valve deformation using 3-D computed tomography images.

    PubMed

    Liang, Liang; Kong, Fanwei; Martin, Caitlin; Pham, Thuy; Wang, Qian; Duncan, James; Sun, Wei

    2017-05-01

    To conduct a patient-specific computational modeling of the aortic valve, 3-D aortic valve anatomic geometries of an individual patient need to be reconstructed from clinical 3-D cardiac images. Currently, most of computational studies involve manual heart valve geometry reconstruction and manual finite element (FE) model generation, which is both time-consuming and prone to human errors. A seamless computational modeling framework, which can automate this process based on machine learning algorithms, is desirable, as it can not only eliminate human errors and ensure the consistency of the modeling results but also allow fast feedback to clinicians and permits a future population-based probabilistic analysis of large patient cohorts. In this study, we developed a novel computational modeling method to automatically reconstruct the 3-D geometries of the aortic valve from computed tomographic images. The reconstructed valve geometries have built-in mesh correspondence, which bridges harmonically for the consequent FE modeling. The proposed method was evaluated by comparing the reconstructed geometries from 10 patients with those manually created by human experts, and a mean discrepancy of 0.69 mm was obtained. Based on these reconstructed geometries, FE models of valve leaflets were developed, and aortic valve closure from end systole to middiastole was simulated for 7 patients and validated by comparing the deformed geometries with those manually created by human experts, and a mean discrepancy of 1.57 mm was obtained. The proposed method offers great potential to streamline the computational modeling process and enables the development of a preoperative planning system for aortic valve disease diagnosis and treatment. Copyright © 2016 John Wiley & Sons, Ltd.
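
    The 0.69 mm and 1.57 mm figures above are mean discrepancies between automatically reconstructed and manually created geometries. One plausible way to compute such a number is the mean nearest-point distance between the two vertex sets, sketched below; the paper's exact metric (for example a point-to-surface or correspondence-based distance) may differ.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def mean_discrepancy(reconstructed_pts, reference_pts, symmetric=True):
        """Mean nearest-neighbour distance between two (N, 3) vertex arrays.

        A plausible way to produce a 'mean discrepancy in mm' figure; the
        paper's exact metric may differ.
        """
        d_ab = cKDTree(reference_pts).query(reconstructed_pts)[0].mean()
        if not symmetric:
            return d_ab
        d_ba = cKDTree(reconstructed_pts).query(reference_pts)[0].mean()
        return 0.5 * (d_ab + d_ba)

    # Toy example: a point set and a slightly perturbed copy (units: mm).
    a = np.random.rand(500, 3) * 10.0
    b = a + np.random.normal(scale=0.3, size=a.shape)
    print(mean_discrepancy(a, b))
    ```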

  14. Fast and flexible 3D object recognition solutions for machine vision applications

    NASA Astrophysics Data System (ADS)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
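
    The recognizer above localizes parts by fitting geometric primitives to 3D data. As a hedged illustration of the simplest such primitive, the sketch below fits a plane to a point cloud with a basic RANSAC loop; the paper's actual algorithms (including the data pre-processing and cylinder/cone fitting) are richer, and the iteration count and inlier tolerance here are arbitrary.

    ```python
    import numpy as np

    def ransac_plane(points, n_iter=200, tol=0.01, rng=None):
        """Fit a plane to a 3D point cloud with a simple RANSAC loop.

        Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
        """
        rng = np.random.default_rng(rng)
        best = (None, None, np.zeros(len(points), dtype=bool))
        for _ in range(n_iter):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-12:
                continue                       # degenerate sample
            n = n / np.linalg.norm(n)
            d = -np.dot(n, p0)
            inliers = np.abs(points @ n + d) < tol
            if inliers.sum() > best[2].sum():
                best = (n, d, inliers)
        return best

    # Toy cloud: a noisy plane z = 0.2 plus scattered outliers.
    rng = np.random.default_rng(1)
    plane = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                             0.2 + 0.003 * rng.standard_normal(500)])
    outliers = rng.uniform(-1, 1, (100, 3))
    n, d, mask = ransac_plane(np.vstack([plane, outliers]))
    print(n, d, mask.sum())
    ```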

  15. Computed 3D visualisation of an extinct cephalopod using computer tomographs

    PubMed Central

    Lukeneder, Alexander

    2012-01-01

    The first 3D visualisation of a heteromorph cephalopod species from the Southern Alps (Dolomites, northern Italy) is presented. Computed tomography, palaeontological data and 3D reconstructions were included in the production of a movie, which shows a life reconstruction of the extinct organism. This detailed reconstruction accords with current knowledge of the shape, mode of life and habitat of this animal. The results are based on the most complete shell known thus far of the genus Dissimilites. Object-based combined analyses from computed tomography and various computed 3D facility programmes help to understand morphological details as well as their ontogenetic changes in fossil material. In this study, an additional goal was to show changes in locomotion during different ontogenetic phases of such fossil, marine shell-bearing animals (ammonoids). Hence, the presented models and tools can serve as starting points for discussions on morphology and locomotion of extinct cephalopods in general, and of the genus Dissimilites in particular. The heteromorph ammonoid genus Dissimilites is interpreted here as an active swimmer of the Tethyan Ocean. This study portrays non-destructive methods of 3D visualisation applied to palaeontological material, starting with computed tomography and resulting in animated, high-quality video clips. The 3D geometrical models and animation presented here, which are based on palaeontological material, demonstrate the wide range of applications and analytical techniques, and also outline possible limitations of 3D models in earth sciences and palaeontology. The realistic 3D models and motion pictures can easily be shared amongst palaeontologists. Data, images and short clips can be discussed online and, if necessary, adapted in morphological details and motion style to better represent the cephalopod animal. PMID:24850976

  16. Computed 3D visualisation of an extinct cephalopod using computer tomographs

    NASA Astrophysics Data System (ADS)

    Lukeneder, Alexander

    2012-08-01

    The first 3D visualisation of a heteromorph cephalopod species from the Southern Alps (Dolomites, northern Italy) is presented. Computed tomography, palaeontological data and 3D reconstructions were included in the production of a movie, which shows a life reconstruction of the extinct organism. This detailed reconstruction accords with current knowledge of the shape, mode of life and habitat of this animal. The results are based on the most complete shell known thus far of the genus Dissimilites. Object-based combined analyses from computed tomography and various computed 3D facility programmes help to understand morphological details as well as their ontogenetic changes in fossil material. In this study, an additional goal was to show changes in locomotion during different ontogenetic phases of such fossil, marine shell-bearing animals (ammonoids). Hence, the presented models and tools can serve as starting points for discussions on morphology and locomotion of extinct cephalopods in general, and of the genus Dissimilites in particular. The heteromorph ammonoid genus Dissimilites is interpreted here as an active swimmer of the Tethyan Ocean. This study portrays non-destructive methods of 3D visualisation applied to palaeontological material, starting with computed tomography and resulting in animated, high-quality video clips. The 3D geometrical models and animation presented here, which are based on palaeontological material, demonstrate the wide range of applications and analytical techniques, and also outline possible limitations of 3D models in earth sciences and palaeontology. The realistic 3D models and motion pictures can easily be shared amongst palaeontologists. Data, images and short clips can be discussed online and, if necessary, adapted in morphological details and motion style to better represent the cephalopod animal.

  17. Computer vision in the poultry industry

    USDA-ARS?s Scientific Manuscript database

    Computer vision is becoming increasingly important in the poultry industry due to increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist. Researc...

  18. Intelligent robots and computer vision

    SciTech Connect

    Casasent, D.P.

    1986-01-01

    This book presents the papers given at a conference on artificial intelligence and robot vision. Topics considered at the conference included pattern recognition, image processing for intelligent robotics, three-dimensional vision (depth and motion), vision modeling and shape estimation, spatial reasoning, the symbolic processing of visual information, robotic sensors and applications, intelligent control architectures for robot systems, robot languages and programming, human-machine interfaces, robotics applications, and architectures of robotics.

  19. Potential hazards of viewing 3-D stereoscopic television, cinema and computer games: a review.

    PubMed

    Howarth, Peter A

    2011-03-01

    The visual stimulus provided by a 3-D stereoscopic display differs from that of the real world because the image provided to each eye is produced on a flat surface. The distance from the screen to the eye remains fixed, providing a single focal distance, but the introduction of disparity between the images allows objects to be located geometrically in front of, or behind, the screen. Unlike in the real world, the stimulus to accommodation and the stimulus to convergence do not match. Although this mismatch is used positively in some forms of Orthoptic treatment, a number of authors have suggested that it could negatively lead to the development of asthenopic symptoms. From knowledge of the zone of clear, comfortable, single binocular vision one can predict that, for people with normal binocular vision, adverse symptoms will not be present if the discrepancy is small, but are likely if it is large, and that what constitutes 'large' and 'small' are idiosyncratic to the individual. The accommodation-convergence mismatch is not, however, the only difference between the natural and the artificial stimuli. In the former case, an object located in front of, or behind, a fixated object will not only be perceived as double if the images fall outside Panum's fusional areas, but it will also be defocused and blurred. In the latter case, however, it is usual for the producers of cinema, TV or computer game content to provide an image that is in focus over the whole of the display, and as a consequence diplopic images will be sharply in focus. The size of Panum's fusional area is spatial frequency-dependent, and because of this the high spatial frequencies present in the diplopic 3-D image will provide a different stimulus to the fusion system from that found naturally. © 2011 The College of Optometrists.
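
    The review's core argument rests on simple viewing geometry: screen disparity moves the vergence stimulus in front of or behind the display while the accommodation stimulus stays at the screen. The sketch below quantifies that mismatch for assumed viewing parameters; the interocular distance, the distances and all symbols are illustrative and are not taken from the paper.

    ```python
    import math

    def av_mismatch(screen_dist_m, object_dist_m, ipd_m=0.063):
        """Accommodation-vergence mismatch for a stereoscopic display.

        Accommodation demand stays at the screen (1/screen distance, in
        diopters); vergence demand follows the simulated object distance.
        Returns (mismatch in diopters, vergence angles in degrees).
        """
        accom_screen = 1.0 / screen_dist_m
        accom_object = 1.0 / object_dist_m            # what vergence "asks for"
        verg_screen = 2 * math.degrees(math.atan(ipd_m / (2 * screen_dist_m)))
        verg_object = 2 * math.degrees(math.atan(ipd_m / (2 * object_dist_m)))
        return accom_object - accom_screen, (verg_screen, verg_object)

    # Object simulated 1 m in front of a 2.5 m screen: ~0.27 D of conflict.
    print(av_mismatch(2.5, 1.5))
    ```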

  20. Implementation Of True 3D Cursors In Computer Graphics

    NASA Astrophysics Data System (ADS)

    Butts, David R.; McAllister, David F.

    1988-06-01

    The advances in stereoscopic image display techniques have shown an increased need for real-time interaction with the three-dimensional image. We have developed a prototype real-time stereoscopic cursor to investigate this interaction. The results have pointed out areas where hardware speeds are a limiting factor, as well as areas where various methodologies cause perceptual difficulties. This paper addresses the psychological and perceptual anomalies involved in stereo image techniques, cursor generation and motion, and the use of the device as a 3D drawing and depth measuring tool.

  1. NewVision: a program for interactive navigation and analysis of multiple 3-D data sets using coordinated virtual cameras.

    PubMed

    Pixton, J L; Belmont, A S

    1996-01-01

    We describe "NewVision", a program designed for rapid interactive display, sectioning, and comparison of multiple large three-dimensional (3-D) reconstructions. User tools for navigating within large 3-D data sets and selecting local subvolumes for display, combined with view caching, fast integer interpolation, and background tasking, provide highly interactive viewing of arbitrarily sized data sets on Silicon Graphics systems ranging from simple workstations to supercomputers. Multiple windows, each showing different views of the same 3-D data set, are coordinated through mapping of local coordinate systems to a single global world coordinate system. Mapping to a world coordinate system allows quantitative measurements from any open window as well as creation of linked windows in which operations such as panning, zooming, and 3-D rotations of the viewing perspective in any one window are mirrored by corresponding transformations in the views shown in other linked windows. The specific example of tracing 3-D fiber trajectories is used to demonstrate the potential of the linked window concept. A global overview of NewVision's design and organization is provided, and future development directions are briefly discussed.

  2. Research progress of depth detection in vision measurement: a novel project of bifocal imaging system for 3D measurement

    NASA Astrophysics Data System (ADS)

    Li, Anhu; Ding, Ye; Wang, Wei; Zhu, Yongjian; Li, Zhizhong

    2013-09-01

    The paper reviews recent research progress in vision measurement. The general methods of depth detection used in monocular stereo vision are compared with each other. As a result, a novel bifocal imaging measurement system based on the zoom method is proposed to solve the problem of online 3D measurement. This system consists of a primary lens and a secondary lens with different focal lengths, matched to meet large-range and high-resolution imaging requirements without time delay or imaging errors, which is of significance for industrial applications.

  3. Generating 3D anatomically detailed models of the retina from OCT data sets: implications for computational modelling

    NASA Astrophysics Data System (ADS)

    Shalbaf, Farzaneh; Dokos, Socrates; Lovell, Nigel H.; Turuwhenua, Jason; Vaghefi, Ehsan

    2015-12-01

    Retinal prostheses have been proposed to restore vision for those suffering from retinal pathologies that mainly affect the photoreceptor layer but keep the inner retina intact. Prior to costly and risky experimental studies, computational modelling of the retina will help to optimize the device parameters and enhance the outcomes. Here, we developed an anatomically detailed computational model of the retina based on OCT data sets. The consecutive OCT images of an individual were segmented to provide a 3D representation of the retina in the form of finite elements. Thereafter, the electrical properties of the retina were modelled by implementing partial differential equations on the 3D mesh. Different electrode configurations, that is, bipolar and hexapolar configurations, were implemented and the results were compared with previous computational and experimental studies. Furthermore, the possible effects of the curvature of retinal layers on the current steering through the retina were proposed and linked to clinical observations.

  4. Biological Basis For Computer Vision: Some Perspectives

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels, and the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.

  5. Computer vision based room interior design

    NASA Astrophysics Data System (ADS)

    Ahmad, Nasir; Hussain, Saddam; Ahmad, Kashif; Conci, Nicola

    2015-12-01

    This paper introduces a new application of computer vision. To the best of the authors' knowledge, it is the first attempt to incorporate computer vision techniques into room interior design. The computer vision based interior design is achieved in two steps: object identification and color assignment. An image segmentation approach is used for the identification of the objects in the room, and different color schemes are used for color assignment to these objects. The proposed approach is applied to simple as well as complex images from online sources. The proposed approach not only accelerates the process of interior design but also makes it more efficient by offering multiple alternatives.

  6. Computer-assisted three-dimensional surgical planning and simulation: 3D virtual osteotomy.

    PubMed

    Xia, J; Ip, H H; Samman, N; Wang, D; Kot, C S; Yeung, R W; Tideman, H

    2000-02-01

    A computer-assisted three-dimensional virtual osteotomy system for orthognathic surgery (CAVOS) is presented. The virtual reality workbench is used for surgical planning. The surgeon is immersed in a virtual reality environment with stereo eyewear, holds a virtual "scalpel" (3D mouse), and operates on a "real" patient (3D visualization) to obtain a presurgical prediction (3D bony segment movements). Virtual surgery on a computer-generated 3D head model is simulated and can be visualized from any arbitrary viewing point on a personal computer system.

  7. The 3d International Workshop on Computational Electronics

    NASA Astrophysics Data System (ADS)

    Goodnick, Stephen M.

    1994-09-01

    The Third International Workshop on Computational Electronics (IWCE) was held at the Benson Hotel in downtown Portland, Oregon, on May 18, 19, and 20, 1994. The workshop was devoted to a broad range of topics in computational electronics related to the simulation of electronic transport in semiconductors and semiconductor devices, particularly those which use large computational resources. The workshop was supported by the National Science Foundation (NSF), the Office of Naval Research and the Army Research Office, with local support from the Oregon Joint Graduate Schools of Engineering and the Oregon Center for Advanced Technology Education. There were over 100 participants in the Portland workshop, of whom more than one quarter represented research groups outside of the United States, from Austria, Canada, France, Germany, Italy, Japan, Switzerland, and the United Kingdom. A total of 81 papers were presented at the workshop: 9 invited talks, 26 oral presentations and 46 poster presentations. The emphasis of the contributions reflected the interdisciplinary nature of computational electronics, with researchers from the Chemistry, Computer Science, Mathematics, Engineering, and Physics communities participating in the workshop.

  8. Computational Vision: A Critical Review

    DTIC Science & Technology

    1989-10-01

    Abstract not indexed; the retrieved text consists of fragments of the report's bibliography (including Hodgkin, A. L. and Huxley, A. F., "A quantitative description of membrane current...") and of its sections on neuronal modeling, low-level vision, and the false-targets problem in stereo matching.

  9. An Automatic 3d Reconstruction Method Based on Multi-View Stereo Vision for the Mogao Grottoes

    NASA Astrophysics Data System (ADS)

    Xiong, J.; Zhong, S.; Zheng, L.

    2015-05-01

    This paper presents an automatic three-dimensional reconstruction method based on multi-view stereo vision for the Mogao Grottoes. 3D digitization techniques have been used in cultural heritage conservation and replication over the past decade, especially methods based on binocular stereo vision. However, mismatched points are inevitable in traditional binocular stereo matching due to repeatable or similar features in binocular images. In order to greatly reduce the probability of mismatching and improve measurement precision, a portable four-camera photographic measurement system is used for 3D modelling of a scene. The four cameras of the measurement system form six binocular systems with baselines of different lengths to add extra matching constraints and offer multiple measurements. A matching error based on the epipolar constraint is introduced to remove mismatched points. Finally, an accurate point cloud can be generated by multi-image matching and sub-pixel interpolation. Delaunay triangulation and texture mapping are performed to obtain the 3D model of a scene. The method has been tested on the 3D reconstruction of several scenes of the Mogao Grottoes, and good results verify the effectiveness of the method.
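
    The mismatch removal described above scores candidate correspondences against the epipolar constraint. A minimal sketch of that check is given below: each match is scored by the symmetric distance of the two points to the epipolar lines induced by a fundamental matrix F. The four-camera configuration of the paper is not reproduced, and the tolerance value is an illustrative assumption.

    ```python
    import numpy as np

    def epipolar_errors(F, pts1, pts2):
        """Symmetric point-to-epipolar-line distance for candidate matches.

        F: (3, 3) fundamental matrix mapping view 1 to view 2.
        pts1, pts2: (N, 2) pixel coordinates of candidate correspondences.
        """
        x1 = np.column_stack([pts1, np.ones(len(pts1))])
        x2 = np.column_stack([pts2, np.ones(len(pts2))])
        l2 = x1 @ F.T                      # epipolar lines in image 2
        l1 = x2 @ F                        # epipolar lines in image 1
        d2 = np.abs(np.sum(l2 * x2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
        d1 = np.abs(np.sum(l1 * x1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
        return 0.5 * (d1 + d2)

    def keep_good_matches(F, pts1, pts2, tol_px=1.0):
        """Boolean mask of matches whose epipolar error is below tol_px."""
        return epipolar_errors(F, np.asarray(pts1), np.asarray(pts2)) < tol_px

    # Example call with a hypothetical fundamental matrix and candidate matches:
    # mask = keep_good_matches(F, pts_left, pts_right, tol_px=1.5)
    ```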

  10. Tools for 3D scientific visualization in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high-speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results of the simulation are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as descriptions of other hardware for digital video and film recording.

  11. Computational 3-D Model of the Human Respiratory System

    EPA Science Inventory

    We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...

  12. Modeling Computer Communication Networks in a Realistic 3D Environment

    DTIC Science & Technology

    2010-03-01

    Abstract not indexed; the retrieved text consists of table-of-contents fragments covering animated visualization, visualization and computer networks, network connectivity, network traffic animation, and scene control.

  13. Computational 3-D Model of the Human Respiratory System

    EPA Science Inventory

    We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...

  14. Non-Fourier Computer Generated Holography for 3-D Display

    DTIC Science & Technology

    1989-11-01

    Abstract not indexed; the retrieved text consists of acknowledgement and bibliography fragments, including: Barakat, R., et al., The Computer in Optical Research: Methods and Applications, Berlin.

  15. Computational ocean acoustics: Advances in 3D ocean acoustic modeling

    NASA Astrophysics Data System (ADS)

    Schmidt, Henrik; Jensen, Finn B.

    2012-11-01

    The numerical models of ocean acoustic propagation developed in the 1980s are still in widespread use today, and the field of computational ocean acoustics is often considered a mature field. However, the explosive increase in computational power available to the community has created opportunities for modeling phenomena that were earlier beyond reach. Most notably, three-dimensional propagation and scattering problems have been computationally prohibitive, but are now addressed routinely using brute-force numerical approaches such as the Finite Element Method, in particular for target scattering problems, where they are being combined with traditional wave-theory propagation models in hybrid modeling frameworks. Also, recent years have seen the development of hybrid approaches coupling oceanographic circulation models with acoustic propagation models, enabling the forecasting of sonar performance uncertainty in dynamic ocean environments. These and other advances made over the last couple of decades support the notion that the field of computational ocean acoustics is far from mature. [Work supported by the Office of Naval Research, Code 321OA].

  16. Computation of tooth axes of existent and missing teeth from 3D CT images.

    PubMed

    Wang, Yang; Wu, Lin; Guo, Huayan; Qiu, Tiantian; Huang, Yuanliang; Lin, Bin; Wang, Lisheng

    2015-12-01

    Orientations of tooth axes are important quantitative information used in dental diagnosis and surgery planning. However, their computation is a complex problem, and the existing methods have their respective limitations. This paper proposes new methods to compute 3D tooth axes from 3D CT images for existent teeth with a single root or multiple roots, and to estimate 3D tooth axes from 3D CT images for missing teeth. The tooth axis of a single-root tooth is determined by segmenting the pulp cavity of the tooth and computing the principal direction of the pulp cavity, and the estimation of the tooth axes of missing teeth is modeled as an interpolation problem over quaternions along a 3D curve. The proposed methods can either avoid the difficult tooth segmentation problem or mitigate the limitations of existing methods. Their effectiveness and practicality are demonstrated by experimental results on different 3D CT images from the clinic.
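
    Two steps of the method lend themselves to a short sketch: the principal direction of the segmented pulp-cavity voxels (computable by PCA/SVD) for single-root teeth, and interpolation of axis orientations along the dental arch for missing teeth. SciPy's Slerp is used below as a stand-in for the paper's quaternion interpolation scheme; the arch parameterisation and all names are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def pulp_axis(voxels_xyz):
        """Principal direction (unit vector) of pulp-cavity voxel coordinates."""
        centred = voxels_xyz - voxels_xyz.mean(axis=0)
        # Right singular vector with the largest singular value = principal axis.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        return vt[0]

    def interpolate_missing_axes(known_params, known_rotations, missing_params):
        """Estimate axes of missing teeth by interpolating rotations along the arch.

        known_params / missing_params: scalar positions along the dental-arch curve.
        known_rotations: scipy Rotation holding the known teeth's axis rotations.
        """
        slerp = Slerp(known_params, known_rotations)
        return slerp(missing_params)

    # Toy example: axis of a synthetic elongated voxel cloud (close to +/- z).
    rng = np.random.default_rng(0)
    voxels = rng.standard_normal((1000, 3)) * np.array([0.3, 0.3, 3.0])
    print(pulp_axis(voxels))

    rots = Rotation.from_euler("x", [0, 10, 30], degrees=True)
    print(interpolate_missing_axes([0.0, 1.0, 2.0], rots, [0.5, 1.5]).as_euler("xyz", degrees=True))
    ```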

  17. Tensor3D: A computer graphics program to simulate 3D real-time deformation and visualization of geometric bodies

    NASA Astrophysics Data System (ADS)

    Pallozzi Lavorante, Luca; Dirk Ebert, Hans

    2008-07-01

    Tensor3D is a geometric modeling program with the capacity to simulate and visualize, in real time, deformations specified through a tensor matrix and applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shears. Besides geometric objects that are immediately available in the program window, the program can read other models from disk, thus being able to import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later and deformed again, in order to study different steps of progressive strain, or to make this data available to other programs. The shape of stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, allied to the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for the study of geometric deformations directly in three dimensions in teaching as well as research activities.
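
    The core operation Tensor3D visualises can be written down compactly: a 3x3 tensor applied to model vertices, with the strain ellipsoid's principal axes and stretches given by the tensor's singular value decomposition. The sketch below illustrates that linear algebra only; it is not Tensor3D's actual implementation (which is built on VTK and Tcl/Tk), and the simple-shear example is an assumption.

    ```python
    import numpy as np

    def deform(vertices, F):
        """Apply a 3x3 deformation-gradient tensor F to an (N, 3) vertex array."""
        return vertices @ F.T

    def strain_ellipsoid(F):
        """Principal axes and stretches of the ellipsoid F maps the unit sphere onto.

        SVD F = U S V^T: columns of U are the ellipsoid's axes, S its semi-axes.
        """
        u, s, _ = np.linalg.svd(F)
        return u, s

    # Simple shear with gamma = 0.5 applied to a unit cube's corners.
    F = np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
    axes, stretches = strain_ellipsoid(F)
    print(deform(cube, F))
    print(stretches)       # principal stretches of the simple shear
    ```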

  18. New computer-controlled color vision test

    NASA Astrophysics Data System (ADS)

    Ladunga, Karoly; Wenzel, Klara; Abraham, Gyorgy

    1999-12-01

    A computer-controlled color discrimination test is described which enables rapid testing using selected colors from the color space of normal CRT monitors. We have investigated whether differences in color discrimination between groups of normal and color-deficient observers could be detected using a computer-controlled test of color vision. The test accurately identified the differences between the normal and color-deficient groups. A new color discrimination test has been developed to more efficiently evaluate color vision.

  19. Influence of stereopsis and abnormal binocular vision on ocular and systemic discomfort while watching 3D television.

    PubMed

    Kim, S-H; Suh, Y-W; Yun, C; Yoo, E-J; Yeom, J-H; Cho, Y A

    2013-11-01

    To evaluate the degree of three-dimensional (3D) perception and ocular and systemic discomfort in patients with abnormal binocular vision (ABV), and their relationship to stereoacuity while watching a 3D television (TV). Patients with strabismus, amblyopia, or anisometropia older than 9 years were recruited for the ABV group (98 subjects). Normal volunteers were enrolled in the control group (32 subjects). Best-corrected visual acuity, refractive errors, angle of strabismus, and stereoacuity were measured. After watching 3D TV for 20 min, a survey was conducted to evaluate the degree of 3D perception, and ocular and systemic discomfort while watching 3D TV. One hundred and thirty subjects were enrolled in this study. The ABV group included 49 patients with strabismus, 22 with amblyopia, and 27 with anisometropia. The ABV group showed worse stereoacuity at near and distant fixation (P<0.001). Ocular and systemic discomfort was, however, not different between the two groups. Fifty-three subjects in the ABV group and all subjects in the control group showed good stereopsis (60 s of arc or better at near), and they reported more dizziness, headache, eye fatigue, and pain (P<0.05) than the other 45 subjects with decreased stereopsis. The subjects with good stereopsis in the ABV group felt more eye fatigue than those in the control group (P=0.031). The subjects with decreased stereopsis showed more difficulty with 3D perception (P<0.001). The subjects with abnormal stereopsis showed decreased 3D perception while watching 3D TV. However, ocular and systemic discomfort was more closely related to better stereopsis.

  20. Influence of stereopsis and abnormal binocular vision on ocular and systemic discomfort while watching 3D television

    PubMed Central

    Kim, S-H; Suh, Y-W; Yun, C; Yoo, E-J; Yeom, J-H; Cho, Y A

    2013-01-01

    Purpose To evaluate the degree of three-dimensional (3D) perception and ocular and systemic discomfort in patients with abnormal binocular vision (ABV), and their relationship to stereoacuity while watching a 3D television (TV). Methods Patients with strabismus, amblyopia, or anisometropia older than 9 years were recruited for the ABV group (98 subjects). Normal volunteers were enrolled in the control group (32 subjects). Best-corrected visual acuity, refractive errors, angle of strabismus, and stereoacuity were measured. After watching 3D TV for 20 min, a survey was conducted to evaluate the degree of 3D perception, and ocular and systemic discomfort while watching 3D TV. Results One hundred and thirty subjects were enrolled in this study. The ABV group included 49 patients with strabismus, 22 with amblyopia, and 27 with anisometropia. The ABV group showed worse stereoacuity at near and distant fixation (P<0.001). Ocular and systemic discomfort was, however, not different between the two groups. Fifty-three subjects in the ABV group and all subjects in the control group showed good stereopsis (60 s of arc or better at near), and they reported more dizziness, headache, eye fatigue, and pain (P<0.05) than the other 45 subjects with decreased stereopsis. The subjects with good stereopsis in the ABV group felt more eye fatigue than those in the control group (P=0.031). The subjects with decreased stereopsis showed more difficulty with 3D perception (P<0.001). Conclusions The subjects with abnormal stereopsis showed decreased 3D perception while watching 3D TV. However, ocular and systemic discomfort was more closely related to better stereopsis. PMID:23928879

  1. Vision-based building energy diagnostics and retrofit analysis using 3D thermography and building information modeling

    NASA Astrophysics Data System (ADS)

    Ham, Youngjib

    localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images and localizing the thermal images into the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. By using these models, auditors can conduct virtual walk-through in buildings and explore the as-is condition of building geometry and the associated thermal conditions in 3D. Second, to address the challenges in qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, the Energy Performance Augmented Reality (EPAR) models are formed which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred based on performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertex in 3D. Then, based on the historical weather data reflecting energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost for this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy efficient retrofit decision-makings. This is a major departure from offhand calculations that are based on historical cost data of industry best practices. Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertexes to the

  2. Building a 3D Computed Tomography Scanner From Surplus Parts.

    PubMed

    Haidekker, Mark A

    2014-01-01

    Computed tomography (CT) scanners are expensive imaging devices, often out of reach for small research groups. Designing and building a CT scanner from modular components is possible, and this article demonstrates that realization of a CT scanner from components is surprisingly easy. However, the high costs of a modular X-ray source and detector limit the overall cost savings. In this article, the possibility of building a CT scanner with available surplus X-ray parts is discussed, and a practical device is described that incurred costs of less than $16,000. The image quality of this device is comparable with commercial devices. The disadvantage is that design constraints imposed by the available components lead to slow scan speeds and a resolution of 0.5 mm. Despite these limitations, a device such as this is attractive for imaging studies in the biological and biomedical sciences, as well as for advancing CT technology itself.

  3. GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System

    SciTech Connect

    James Menart

    2013-06-07

    This file is the setup file for the computer program GEO3D. GEO3D is a computer program written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. This is a very detailed three-dimensional computer model. This program produces detailed heat transfer and temperature field information for a vertical GSHP system.

  4. Target detect system in 3D using vision apply on plant reproduction by tissue culture

    NASA Astrophysics Data System (ADS)

    Vazquez Rueda, Martin G.; Hahn, Federico

    2001-03-01

    This paper presents preliminary results for a three-dimensional system that uses computer vision to manipulate plants in a tissue culture process. The system is able to estimate the position of the plant in the work area: it first calculates the position and sends the information to the mechanical system, then recalculates the position and, if necessary, repositions the mechanical system, using a neural system to improve the location of the plant. The system uses only vision to sense position and closes the control loop with a neural system to detect the target and position the mechanical system; the results are compared with those of an open-loop system.

  5. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation image. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, the recent advances in CT imaging technique and 3D visualization of the hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction technique, contrast-enhanced techniques, new application of advanced CT scan techniques, and new virtual reality simulation and navigation imaging.

  6. Computational models of human vision with applications

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision. The spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform as a tool of image processing was evaluated and it is suggested that it could be used in signal processing applications, GR image processing.

  7. Seeing in 3-d with just one eye: stereopsis without binocular vision.

    PubMed

    Vishwanath, Dhanraj; Hibbard, Paul B

    2013-09-01

    Humans can perceive depth when viewing with one eye, and even when viewing a two-dimensional picture of a three-dimensional scene. However, viewing a real scene with both eyes produces a more compelling three-dimensional experience of immersive space and tangible solid objects. A widely held belief is that this qualitative visual phenomenon (stereopsis) is a by-product of binocular vision. In the research reported here, we empirically established, for the first time, the qualitative characteristics associated with stereopsis to show that they can occur for static two-dimensional pictures without binocular vision. Critically, we show that stereopsis is a measurable qualitative attribute and that its induction while viewing pictures is not consistent with standard explanations based on depth-cue conflict or the perception of greater depth magnitude. These results challenge the conventional understanding of the underlying cause, variation, and functional role of stereopsis.

  8. Enhanced computer vision with Microsoft Kinect sensor: a review.

    PubMed

    Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.

  9. The spine in 3D. Computed tomographic reformation from 2D axial sections.

    PubMed

    Virapongse, C; Gmitro, A; Sarwar, M

    1986-01-01

    A new program (3D83, General Electric) was used to reformat three-dimensional (3D) images from two-dimensional (2D) computed tomographic axial scans in 18 patients who had routine scans of the spine. The 3D spine images were extremely true to life and could be rotated around all three principal axes (constituting a movie), so that an illusion of head-motion parallax was created. The benefit of 3D reformation with this program is primarily for preoperative planning. It appears that 3D can also effectively determine the patency of foraminal stenosis by reformatting in hemisections. Currently this program is subject to several drawbacks that require user interaction and long reconstruction time. With further improvement, 3D reformation will find increasing clinical applicability.

  10. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    NASA Astrophysics Data System (ADS)

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from its arbitrary pair of stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed as the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton. It is used as a seed for recovering shape of the 3D object, and the extracted boundary is used for terminating the growing process of the object. NURBS-skeleton is used to extract the skeleton of both views. Affine invariant property of the convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point of radius equal to the minimum distance to the boundary is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
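
    The growing process described above can be sketched as a union of skeleton-centred spheres whose radii are the distances to the object boundary. The voxelised illustration below assumes the skeleton and boundary are already available as 3D point sets; the NURBS-skeleton extraction, the stereo correspondence step and the grid resolution are not taken from the paper.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def fill_spheres(skeleton_pts, boundary_pts, grid_res=0.05):
        """Voxelize the union of skeleton-centred spheres described in the paper.

        Each sphere's radius is the distance from its skeleton point to the
        nearest boundary point; filling all spheres reconstructs the object.
        """
        radii, _ = cKDTree(boundary_pts).query(skeleton_pts)
        lo, hi = boundary_pts.min(axis=0), boundary_pts.max(axis=0)
        axes = [np.arange(a, b + grid_res, grid_res) for a, b in zip(lo, hi)]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
        tree = cKDTree(grid)
        inside = set()
        for p, r in zip(skeleton_pts, radii):
            inside.update(tree.query_ball_point(p, r))
        return grid[sorted(inside)]

    # Toy example: a straight skeleton inside a roughly cylindrical boundary.
    z = np.linspace(0.0, 1.0, 20)
    skeleton = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
    theta = np.linspace(0.0, 2.0 * np.pi, 200)
    rng = np.random.default_rng(0)
    boundary = np.column_stack([0.2 * np.cos(theta), 0.2 * np.sin(theta),
                                rng.uniform(0.0, 1.0, 200)])
    print(fill_spheres(skeleton, boundary).shape)
    ```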

  11. Computer-assisted three-dimensional surgical planning and simulation: 3D color facial model generation.

    PubMed

    Xia, J; Wang, D; Samman, N; Yeung, R W; Tideman, H

    2000-02-01

    A scheme for texture mapping a 3D individualized color photo-realistic facial model from real color portraits and CT data is described. First, 3D CT images including both soft and hard tissues are reconstructed from sequential CT slices using a surface rendering technique. Facial features are extracted from the 3D soft tissue. A generic mesh is individualized by correspondence matching and interpolation from those feature vertices. Three digitized color portraits with the "third" dimension from the reconstructed soft tissue are blended and texture-mapped onto the 3D head model (mesh). A color simulated human head generated from frontal, right and left real color portraits can be viewed from an arbitrary angle on an inexpensive and user-friendly conventional personal computer. This scheme is the basic procedure in 3D computer-assisted simulation surgery.

  12. A low cost computer aided design (CAD) system for 3D-reconstruction from serial sections.

    PubMed

    Keri, C; Ahnelt, P K

    1991-05-01

    This paper describes an approach to computer-assisted 3D-reconstruction of neuronal specimens based on a low cost yet powerful software package for a personal computer (Atari ST). It provides an easy to handle (mouse driven) object editor to create 3D models of medium complexity (15,000 vertices) from sections or from scratch. The models may be displayed in various modes including stereo viewing and complex animation sequences.

  13. Remote sensing of vegetation structure using computer vision

    NASA Astrophysics Data System (ADS)

    Dandois, Jonathan P.

    High-spatial resolution measurements of vegetation structure are needed for improving understanding of ecosystem carbon, water and nutrient dynamics, the response of ecosystems to a changing climate, and for biodiversity mapping and conservation, among many research areas. Our ability to make such measurements has been greatly enhanced by continuing developments in remote sensing technology---allowing researchers the ability to measure numerous forest traits at varying spatial and temporal scales and over large spatial extents with minimal to no field work, which is costly for large spatial areas or logistically difficult in some locations. Despite these advances, there remain several research challenges related to the methods by which three-dimensional (3D) and spectral datasets are joined (remote sensing fusion) and the availability and portability of systems for frequent data collections at small scale sampling locations. Recent advances in the areas of computer vision structure from motion (SFM) and consumer unmanned aerial systems (UAS) offer the potential to address these challenges by enabling repeatable measurements of vegetation structural and spectral traits at the scale of individual trees. However, the potential advances offered by computer vision remote sensing also present unique challenges and questions that need to be addressed before this approach can be used to improve understanding of forest ecosystems. For computer vision remote sensing to be a valuable tool for studying forests, bounding information about the characteristics of the data produced by the system will help researchers understand and interpret results in the context of the forest being studied and of other remote sensing techniques. This research advances understanding of how forest canopy and tree 3D structure and color are accurately measured by a relatively low-cost and portable computer vision personal remote sensing system: 'Ecosynth'. Recommendations are made for optimal

  14. Three-dimensional human computer interaction based on 3D widgets for medical data visualization

    NASA Astrophysics Data System (ADS)

    Xue, Jian; Tian, Jie; Zhao, Mingchang

    2005-04-01

    Three-dimensional human-computer interaction plays an important role in 3D visualization. It is important for clinicians to use and handle the results of medical data visualization accurately and easily in order to assist diagnosis and surgery simulation. A 3D human-computer interaction software platform based on 3D widgets has been designed in a traditional object-oriented fashion with some common design patterns and implemented in ANSI C++, including all function modules and some practical widgets. A group of application examples is presented as well. The ultimate objective is to provide a flexible, reliable and extensible 3D interaction platform for medical image processing and analysis.

  15. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task while the avatar copies the gestures captured by the Kinect 3D vision system. The information about the patient's movements, together with the signals obtained from the ergonometric measurement devices, is also used to supervise and evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment that uses the same base elements, four game routines (Touch the Balls 1 and 2, Simon Says, and Follow the Point) are used for rehabilitation. These environments are designed to have a positive influence on the rehabilitation process, reduce costs, and engage the patient.

  16. Computational identification and quantification of trabecular microarchitecture classes by 3-D texture analysis-based clustering.

    PubMed

    Valentinitsch, Alexander; Patsch, Janina M; Burghardt, Andrew J; Link, Thomas M; Majumdar, Sharmila; Fischer, Lukas; Schueller-Weidekamm, Claudia; Resch, Heinrich; Kainberger, Franz; Langs, Georg

    2013-05-01

    High resolution peripheral quantitative computed tomography (HR-pQCT) permits the non-invasive assessment of cortical and trabecular bone density, geometry, and microarchitecture. Although researchers have developed various post-processing algorithms to quantify HR-pQCT image properties, few of these techniques capture image features beyond global structure-based metrics. While 3D-texture analysis is a key approach in computer vision, it has been utilized only infrequently in HR-pQCT research. Motivated by high isotropic spatial resolution and the information density provided by HR-pQCT scans, we have developed and evaluated a post-processing algorithm that quantifies microarchitecture characteristics via texture features in HR-pQCT scans. During a training phase in which clustering was applied to texture features extracted from each voxel of trabecular bone, three distinct clusters, or trabecular microarchitecture classes (TMACs) were identified. These TMACs represent trabecular bone regions with common texture characteristics. The TMACs were then used to automatically segment the voxels of new data into three regions corresponding to the trained cluster features. Regional trabecular bone texture was described by the histogram of relative trabecular bone volume covered by each cluster. We evaluated the intra-scanner and inter-scanner reproducibility by assessing the precision errors (PE), intra class correlation coefficients (ICC) and Dice coefficients (DC) of the method on 14 ultradistal radius samples scanned on two HR-pQCT systems. DC showed good reproducibility in intra-scanner set-up with a mean of 0.870±0.027 (no unit). Even in the inter-scanner set-up the ICC showed high reproducibility, ranging from 0.814 to 0.964. In a preliminary clinical test application, the TMAC histograms appear to be a good indicator, when differentiating between postmenopausal women with (n=18) and without (n=18) prevalent fragility fractures. In conclusion, we could demonstrate
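
    The clustering-then-histogram idea can be sketched with generic tools. The Python snippet below is an illustrative approximation (k-means on per-voxel texture feature vectors, followed by a relative-volume histogram and a Dice reproducibility check); the feature vectors are assumed placeholder inputs, not the authors' exact texture descriptors or pipeline.

    import numpy as np
    from sklearn.cluster import KMeans

    def train_tmacs(training_features, n_classes=3, seed=0):
        """Learn trabecular microarchitecture classes (TMACs) by clustering
        per-voxel texture feature vectors pooled over training scans."""
        return KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit(training_features)

    def tmac_histogram(kmeans, features_per_voxel):
        """Assign each trabecular voxel of a new scan to a TMAC and return the
        histogram of relative bone volume covered by each class."""
        labels = kmeans.predict(features_per_voxel)
        counts = np.bincount(labels, minlength=kmeans.n_clusters)
        return counts / counts.sum()

    def dice(labels_a, labels_b, cls):
        """Dice coefficient of one class between two labelings of the same voxels,
        e.g. for intra- or inter-scanner reproducibility."""
        a, b = labels_a == cls, labels_b == cls
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    # Example with synthetic per-voxel features (rows = voxels, columns = features).
    rng = np.random.default_rng(0)
    model = train_tmacs(rng.normal(size=(5000, 8)))
    print(tmac_histogram(model, rng.normal(size=(2000, 8))))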

  17. Electro-holography display using computer generated hologram of 3D objects based on projection spectra

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Wang, Duocheng; He, Chao

    2012-11-01

    A new method of synthesizing computer-generated holograms (CGHs) of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principle of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information about the 3D objects can be gathered from their projection images. To account for quantization error in the horizontal and vertical directions, the spectral information from each projection image is efficiently extracted in double-circle and four-circle shapes to improve the utilization of the projection spectra. The spectral information from all projection images is then encoded into a computer-generated hologram based on the Fourier transform using conjugate-symmetric extension. The hologram includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD. By illuminating the LCD with a reference beam from a laser source, the amplitude and phase information encoded in the CGH is reconstructed through diffraction of the light modulated by the LCD.
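
    One building block of the encoding step, turning a complex spectrum into a real-valued hologram by conjugate-symmetric extension, can be sketched as follows. This is a generic illustration of the Hermitian-symmetry trick only (a random spectrum stands in for the gathered projection spectra), not the authors' full projection-spectra pipeline.

    import numpy as np

    def encode_real_hologram(spectrum):
        """Embed a complex spectrum into a larger array with Hermitian (conjugate)
        symmetry so that its inverse FFT is purely real, and return that real CGH."""
        m, n = spectrum.shape
        ext = np.zeros((2 * m + 1, 2 * n + 1), dtype=complex)   # centred frequency grid
        ext[m + 1:, n + 1:] = spectrum                           # (+u, +v) quadrant
        ext[:m, :n] = np.conj(spectrum[::-1, ::-1])              # (-u, -v) = conj(+u, +v)
        # ifftshift moves the centre (DC) to index (0, 0); the Hermitian symmetry then
        # guarantees a real-valued inverse transform (up to rounding error).
        return np.fft.ifft2(np.fft.ifftshift(ext)).real

    # Example: a random complex spectrum standing in for the projection spectra.
    rng = np.random.default_rng(1)
    spec = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    cgh = encode_real_hologram(spec)
    print(cgh.shape, np.iscomplexobj(cgh))                       # (129, 129) False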

  18. Computer vision syndrome (CVS) - Thermographic Analysis

    NASA Astrophysics Data System (ADS)

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has grown exponentially in recent decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their broad acceptance by users. The consequences and impact of uninterrupted work with computer screens or displays on visual health have attracted researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great strain, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of them are blurred vision, visual fatigue and Dry Eye Syndrome (DES) due to inadequate lubrication of the ocular surface when blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the differences in temperature variations of healthy ocular surfaces.

  19. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    PubMed

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as a difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified only by moving robots on a relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse through rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate our novel method is accurate and robust.
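
    A minimal numerical sketch of the two ingredients mentioned above (least-squares ground-plane estimation from a depth point cloud, and recovering mounting parameters by formulating an optimization problem) is given below. The cost function and the roll/pitch/height parameterization are illustrative assumptions, not the paper's exact formulation.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def fit_plane(points):
        """Least-squares plane fit: returns (centroid, unit normal) of an (N, 3) cloud."""
        centroid = points.mean(axis=0)
        # The right singular vector with the smallest singular value is the plane normal.
        return centroid, np.linalg.svd(points - centroid)[2][-1]

    def identify_mount(ground_points_cam, init=np.zeros(3)):
        """Estimate roll, pitch (rad) and height (m) of a camera mounted on a robot,
        assuming the observed points lie on flat ground (z = 0 in the body frame)."""
        def cost(params):
            roll, pitch, height = params
            R = Rotation.from_euler("xy", [roll, pitch]).as_matrix()
            z_body = (ground_points_cam @ R.T)[:, 2] + height
            return np.mean(z_body ** 2)          # ground points should map to z = 0
        return minimize(cost, init, method="Nelder-Mead").x

    # Synthetic check: camera pitched 10 degrees, mounted 0.5 m above the ground.
    rng = np.random.default_rng(2)
    pts_ground = np.c_[rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500), np.zeros(500)]
    R_true = Rotation.from_euler("y", np.deg2rad(10)).as_matrix()
    pts_cam = (pts_ground - [0.0, 0.0, 0.5]) @ R_true    # ground seen from the camera
    print("plane normal:", np.round(fit_plane(pts_cam)[1], 3))
    print("roll, pitch, height:", np.round(identify_mount(pts_cam), 3))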

  20. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    PubMed Central

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-01-01

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as a difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified only by moving robots on a relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse through rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate our novel method is accurate and robust. PMID:25912350

  1. Lumber Grading With A Computer Vision System

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  2. Time- and Computation-Efficient Calibration of MEMS 3D Accelerometers and Gyroscopes

    PubMed Central

    Stančin, Sara; Tomažič, Sašo

    2014-01-01

    We propose calibration methods for microelectromechanical system (MEMS) 3D accelerometers and gyroscopes that are efficient in terms of time and computational complexity. The calibration process for both sensors is simple, does not require additional expensive equipment, and can be performed in the field before or between motion measurements. The methods rely on a small number of defined calibration measurements that are used to obtain the values of 12 calibration parameters. This process enables the static compensation of sensor inaccuracies. The values detected by the 3D sensor are interpreted using a generalized 3D sensor model. The model assumes that the values detected by the sensor are equal to the projections of the measured value on the sensor sensitivity axes. Although this finding is trivial for 3D accelerometers, its validity for 3D gyroscopes is not immediately apparent; thus, this paper elaborates on this latter topic. For an example sensor device, calibration parameters were established using calibration measurements of approximately 1.5 min in duration for the 3D accelerometer and 2.5 min in duration for the 3D gyroscope. Correction of each detected 3D value using the established calibration parameters in further measurements requires only nine addition and nine multiplication operations. PMID:25123469

  3. Time- and computation-efficient calibration of MEMS 3D accelerometers and gyroscopes.

    PubMed

    Stančin, Sara; Tomažič, Sašo

    2014-08-13

    We propose calibration methods for microelectromechanical system (MEMS) 3D accelerometers and gyroscopes that are efficient in terms of time and computational complexity. The calibration process for both sensors is simple, does not require additional expensive equipment, and can be performed in the field before or between motion measurements. The methods rely on a small number of defined calibration measurements that are used to obtain the values of 12 calibration parameters. This process enables the static compensation of sensor inaccuracies. The values detected by the 3D sensor are interpreted using a generalized 3D sensor model. The model assumes that the values detected by the sensor are equal to the projections of the measured value on the sensor sensitivity axes. Although this finding is trivial for 3D accelerometers, its validity for 3D gyroscopes is not immediately apparent; thus, this paper elaborates on this latter topic. For an example sensor device, calibration parameters were established using calibration measurements of approximately 1.5 min in duration for the 3D accelerometer and 2.5 min in duration for the 3D gyroscope. Correction of each detected 3D value using the established calibration parameters in further measurements requires only nine addition and nine multiplication operations.
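
    The per-sample correction described above (12 parameters, nine multiplications and nine additions per corrected 3D value) corresponds to an affine model y = S x + b with a full 3 x 3 gain/misalignment matrix S and a 3-vector offset b. The sketch below uses made-up parameter values purely for illustration; the paper's exact parameterization may be written differently (e.g. with the offset applied before the matrix), but the operation count is the same.

    import numpy as np

    # 12 illustrative calibration parameters: 3x3 matrix S and offset vector b.
    S = np.array([[ 1.02,  0.01, -0.02],
                  [-0.01,  0.99,  0.03],
                  [ 0.02,  0.00,  1.01]])
    b = np.array([0.05, -0.02, 0.01])

    def correct(raw_xyz):
        """Statically compensate one detected 3D value: S @ x costs 9 multiplications
        and 6 additions, and adding b costs 3 more additions (9 + 9 in total)."""
        return S @ np.asarray(raw_xyz) + b

    print(correct([0.0, 0.0, 9.81]))   # e.g. a raw accelerometer sample at rest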

  4. Extended gray level co-occurrence matrix computation for 3D image volume

    NASA Astrophysics Data System (ADS)

    Salih, Nurulazirah M.; Dewi, Dyah Ekashanti Octorina

    2017-02-01

    The Gray Level Co-occurrence Matrix (GLCM) is one of the main techniques for texture analysis and has been widely used in many applications. Conventional GLCMs usually focus on two-dimensional (2D) image texture analysis only. However, a three-dimensional (3D) image volume requires specific texture analysis computation. In this paper, an extended 2D to 3D GLCM approach based on the concept of multiple 2D plane positions and pixel orientation directions in the 3D environment is proposed. The algorithm was implemented by breaking down the 3D image volume into 2D slices based on five different plane positions (coordinate axes and oblique axes), resulting in 13 independent directions, and then calculating the GLCMs. The resulting GLCMs were averaged to obtain normalized values, and the 3D texture features were then calculated. A preliminary examination was performed on a 3D image volume (64 x 64 x 64 voxels). Our analysis confirmed that the proposed technique is capable of extracting 3D texture features from the extended GLCM approach. It is a simple and comprehensive technique that can contribute to 3D image analysis.
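
    A minimal sketch of the general idea, accumulating a symmetric GLCM of a 3D volume over the 13 unique displacement directions of the voxel neighbourhood and reducing it to a couple of Haralick-style features, is shown below. It illustrates the approach generically and does not reproduce the authors' exact plane-slicing scheme or feature set.

    import numpy as np
    from itertools import product

    def glcm_3d(volume, levels=8, distance=1):
        """Pooled, symmetric gray-level co-occurrence matrix of a 3D volume over the
        13 unique voxel displacement directions (the 26-neighbourhood up to sign)."""
        edges = np.linspace(volume.min(), volume.max(), levels + 1)[1:-1]
        q = np.digitize(volume, edges)              # quantized gray levels 0 .. levels-1
        offsets = [o for o in product((-1, 0, 1), repeat=3) if o > (0, 0, 0)]   # 13 directions
        glcm = np.zeros((levels, levels))
        for off in offsets:
            src, dst = [], []
            for o in off:
                step = o * distance
                if step > 0:
                    src.append(slice(None, -step)); dst.append(slice(step, None))
                elif step < 0:
                    src.append(slice(-step, None)); dst.append(slice(None, step))
                else:
                    src.append(slice(None)); dst.append(slice(None))
            i, j = q[tuple(src)].ravel(), q[tuple(dst)].ravel()
            np.add.at(glcm, (i, j), 1)
            np.add.at(glcm, (j, i), 1)              # make the co-occurrences symmetric
        glcm /= glcm.sum()                          # normalize the pooled counts
        r, c = np.indices(glcm.shape)
        return {"contrast": np.sum(glcm * (r - c) ** 2), "energy": np.sum(glcm ** 2)}

    vol = np.random.default_rng(3).normal(size=(64, 64, 64))   # stand-in 3D image volume
    print(glcm_3d(vol))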

  5. Computational efficient segmentation of cell nuclei in 2D and 3D fluorescent micrographs

    NASA Astrophysics Data System (ADS)

    De Vylder, Jonas; Philips, Wilfried

    2011-02-01

    This paper proposes a new technique developed for the segmentation of cell nuclei in both 2D and 3D fluorescent micrographs. The proposed method can deal with both blurred edges and touching nuclei. Using a dual scan-line algorithm, it is both memory- and computationally efficient, making it interesting for the analysis of images coming from high-throughput systems or the analysis of 3D microscopic images. Experiments show good results, i.e., a recall of over 0.98.

  6. Degenerative changes of the vertebral column in spatial imaging of 3D computed tomography.

    PubMed

    Krupski, Witold; Majcher, Piotr; Krupski, Mirosław; Fatyga, Marek; Złomaniec, Janusz

    2002-01-01

    In a group of 38 patients with radicular pain syndromes diagnostic value of spatial reconstructions with computed tomography (3D CT) was assessed in examinations of bone structures of the vertebral column. It was found that 3D CT is a technique of choice in the assessment of degenerative stenosis of the vertebral canal, internal surface of the vertebral canal, bone narrowings of intervertebral foramens and lateral recesses.

  7. 3-D field computation: The near-triumph of commercial codes

    SciTech Connect

    Turner, L.R.

    1995-07-01

    In recent years, more and more of those who design and analyze magnets and other devices are using commercial codes rather than developing their own. This paper considers the commercial codes and the features available with them. Other recent trends with 3-D field computation include parallel computation and visualization methods such as virtual reality systems.

  8. Computer vision and artificial intelligence in mammography.

    PubMed

    Vyborny, C J; Giger, M L

    1994-03-01

    The revolution in digital computer technology that has made possible new and sophisticated imaging techniques may next influence the interpretation of radiologic images. In mammography, computer vision and artificial intelligence techniques have been used successfully to detect or to characterize abnormalities on digital images. Radiologists supplied with this information often perform better at mammographic detection or characterization tasks in observer studies than do unaided radiologists. This technology therefore could decrease errors in mammographic interpretation that continue to plague human observers.

  9. Report on Computer Programs for Robotic Vision

    NASA Technical Reports Server (NTRS)

    Cunningham, R. T.; Kan, E. P.

    1986-01-01

    Collection of programs supports robotic research. Report describes computer-vision software library of NASA's Jet Propulsion Laboratory. Programs evolved during past 10 years of research into robotics. Collection includes low- and high-level image-processing software proved in applications ranging from factory automation to spacecraft tracking and grappling. Programs fall into several overlapping categories. Image-utilities category comprises low-level routines that provide computer access to image data and some simple graphical capabilities for displaying results of image processing.

  10. Report on Computer Programs for Robotic Vision

    NASA Technical Reports Server (NTRS)

    Cunningham, R. T.; Kan, E. P.

    1986-01-01

    Collection of programs supports robotic research. Report describes computer-vision software library of NASA's Jet Propulsion Laboratory. Programs evolved during past 10 years of research into robotics. Collection includes low- and high-level image-processing software proved in applications ranging from factory automation to spacecraft tracking and grappling. Programs fall into several overlapping categories. Image-utilities category comprises low-level routines that provide computer access to image data and some simple graphical capabilities for displaying results of image processing.

  11. A new 3D computational model for shaped charge jet breakup

    SciTech Connect

    Zernow, L.; Chapyak, E.J.; Mosso, S.J.

    1996-09-01

    This paper reviews prior 1D and 2D axisymmetric, analytical and computational studies, as well as empirical studies of the shaped charge jet particulation problem and discusses their associated insights and problems. It proposes a new 3D computational model of the particulation process, based upon a simplified version of the observed counter-rotating, double helical surface perturbations, found on softly recovered shaped charge jet particles, from both copper and tantalum jets. This 3D approach contrasts with the random, axisymmetric surface perturbations which have previously been used, to try to infer the observed length distribution of jet particles, on the basis of the most unstable wavelength concept, which leads to the expectation of a continuous distribution of particle lengths. The 3D model, by its very nature, leads to a non-random, periodic distribution of potential initial necking loci, on alternate sides of the stretching jet. This in turn infers a potentially periodic, overlapping, multi-modal distribution of associated jet particle lengths. Since it is unlikely that all potential initial necking sites will be activated simultaneously, the 3D model also suggests that longer jet particles containing partial, but unseparated necks, should be observed fairly often. The computational analysis is in its very early stages and the problems involved in inserting the two helical grooves and in defining the initial conditions and boundary conditions for the computation will be discussed. Available initial results from the 3D computation will be discussed and interpreted.

  12. Revitalizing the Space Shuttle's Thermal Protection System with Reverse Engineering and 3D Vision Technology

    NASA Technical Reports Server (NTRS)

    Wilson, Brad; Galatzer, Yishai

    2008-01-01

    The Space Shuttle is protected by a Thermal Protection System (TPS) made of tens of thousands of individually shaped heat protection tiles. With every flight, tiles are damaged on take-off and on return to Earth. After each mission, the heat tiles must be fixed or replaced depending on the level of damage. As part of the return to flight mission, the TPS requirements are more stringent, leading to a significant increase in heat tile replacements. The replacement operation requires scanning tile cavities, and in some cases the actual tiles. The 3D scan data is used to reverse engineer each tile into a precise CAD model, which in turn is exported to a CAM system for the manufacture of the heat protection tile. Scanning is performed while other activities are going on in the shuttle processing facility. Many technicians work simultaneously on the space shuttle structure, which results in structural movements and vibrations. This paper will cover a portable, ultra-fast data acquisition approach used to scan surfaces in this unstable environment.

  13. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    PubMed

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  14. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    PubMed Central

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-01-01

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results. PMID:25875190

  15. Computing elastic moduli on 3-D X-ray computed tomography image stacks

    NASA Astrophysics Data System (ADS)

    Garboczi, E. J.; Kushch, V. I.

    2015-03-01

    A numerical task of current interest is to compute the effective elastic properties of a random composite material by operating on a 3D digital image of its microstructure obtained via X-ray computed tomography (CT). The 3-D image is usually sub-sampled since an X-ray CT image is typically of order 1000³ voxels or larger, which is considered to be a very large finite element problem. Two main questions for the validity of any such study are then: can the sub-sample size be made sufficiently large to capture enough of the important details of the random microstructure so that the computed moduli can be thought of as accurate, and what boundary conditions should be chosen for these sub-samples? This paper contributes to the answer of both questions by studying a simulated X-ray CT cylindrical microstructure with three phases, cut from a random model system with known elastic properties. A new hybrid numerical method is introduced, which makes use of finite element solutions coupled with exact solutions for elastic moduli of square arrays of parallel cylindrical fibers. The new method allows, in principle, all of the microstructural data to be used when the X-ray CT image is in the form of a cylinder, which is often the case. Appendix A describes a similar algorithm for spherical sub-samples, which may be of use when examining the mechanical properties of particles. Cubic sub-samples are also taken from this simulated X-ray CT structure to investigate the effect of two different kinds of boundary conditions: forced periodic and fixed displacements. It is found that using forced periodic displacements on the non-geometrically periodic cubic sub-samples always gave more accurate results than using fixed displacements, although with about the same precision. The larger the cubic sub-sample, the more accurate and precise was the elastic computation, and using the complete cylindrical sample with the new method gave still more accurate and precise results. Fortran 90

  16. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  17. Automatic Quality Inspection of Percussion Cap Mass Production by Means of 3D Machine Vision and Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.

    Exhaustive quality control is becoming very important in the globalized world market. One example where quality control becomes critical is percussion cap mass production. These elements must stay within a minimum tolerance deviation in their fabrication. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. This task presents multiple problems, such as metallic reflections in the percussion caps, high-speed movement of the system, and mechanical errors and irregularities in percussion cap placement. Due to these problems, the task cannot be solved by traditional image processing methods alone; hence, machine learning algorithms have been tested to provide a feasible classification of the possible defects present in the percussion caps.

  18. Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model

    NASA Astrophysics Data System (ADS)

    Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose

    1999-01-01

    This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, in support of early diagnosis and follow-up of glaucoma. In stereo vision, depth information is obtained by triangulation of corresponding points in a pair of stereo images. In this paper, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are processed with a cubic B-spline interpolation technique in order to obtain a smoother representation. The methodology is being refined especially for use with medical images in the clinical evaluation of eye diseases such as open-angle glaucoma, and is currently under testing for reproducibility and accuracy.
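
    The cepstrum-based disparity measurement can be illustrated with a small numerical sketch: two corresponding windows are stacked side by side, so the composite contains an "echo" whose lag (window width plus disparity) appears as a peak in the power cepstrum. This is a generic illustration of the technique under simplifying assumptions (rectified images, non-negative disparity), not the authors' coarse-to-fine implementation.

    import numpy as np

    def cepstral_disparity(win_left, win_right, max_disp=16):
        """Estimate the horizontal disparity between two corresponding stereo windows
        from the power cepstrum of their side-by-side composite."""
        w = win_left.shape[1]
        composite = np.hstack([win_left, win_right]).astype(float)
        composite -= composite.mean()                       # suppress the DC component
        log_power = np.log(np.abs(np.fft.fft2(composite)) ** 2 + 1e-12)
        cepstrum = np.abs(np.fft.ifft2(log_power))
        # The repeated pattern acts as an echo at horizontal lag w + d, which shows
        # up as a cepstral peak in row 0 at column w + d; search the plausible lags.
        lags = np.arange(w, w + max_disp + 1)
        return int(lags[np.argmax(cepstrum[0, lags])] - w)

    # Synthetic check: the right window shows the same texture shifted by 5 pixels.
    rng = np.random.default_rng(4)
    scene = rng.normal(size=(64, 200))
    left, right = scene[:, 50:114], scene[:, 45:109]
    print(cepstral_disparity(left, right))                  # expected to be about 5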

  19. 3D Multislice and Cone-beam Computed Tomography Systems for Dental Identification.

    PubMed

    Eliášová, Hana; Dostálová, Taťjana

    3D multislice and cone-beam computed tomography (CBCT) in forensic odontology has been shown to be useful not only for single bodies or small numbers of bodies but also in multiple fatality incidents. 3D multislice and cone-beam computed tomography and digital radiography were demonstrated in a forensic examination form. 3D images of the skull and teeth were analysed and validated for long ante mortem/post mortem intervals. The image acquisition was instantaneous; the images could be optically enlarged, measured, superimposed and compared prima vista or using special software, and exported as files. Digital radiology and computed tomography have been shown to be important both in common criminalistics practice and in multiple fatality incidents. Our study demonstrated that CBCT imaging offers fewer image artifacts, low image reconstruction times, mobility of the unit and considerably lower equipment cost.

  20. Computation and parallel implementation for early vision

    NASA Technical Reports Server (NTRS)

    Gualtieri, J. Anthony

    1990-01-01

    The problem of early vision is to transform one or more retinal illuminance images (pixel arrays) into image representations built out of primitive visual features such as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher-level vision tasks including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale-space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.
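
    As a small illustration of the first item, edge finding in a scale-space formulation amounts to thresholding the gradient magnitude of the image smoothed at several Gaussian scales; the per-pixel operations are data parallel, which is what made such algorithms suitable for SIMD machines. The sketch below is a generic serial version in NumPy/SciPy, not the MPP implementation.

    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude

    def scale_space_edges(image, scales=(1.0, 2.0, 4.0), percentile=90):
        """Return one binary edge map per Gaussian scale by thresholding the gradient
        magnitude of the smoothed image (a simple scale-space edge scheme)."""
        edges = {}
        for sigma in scales:
            g = gaussian_gradient_magnitude(image.astype(float), sigma=sigma)
            edges[sigma] = g > np.percentile(g, percentile)   # keep the strongest responses
        return edges

    # Toy example: a bright square on a dark background.
    img = np.zeros((128, 128))
    img[32:96, 32:96] = 1.0
    print({s: int(m.sum()) for s, m in scale_space_edges(img).items()})   # edge pixels per scale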

  1. A computational model for dynamic vision

    NASA Technical Reports Server (NTRS)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

    This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents is encoded in image registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  2. Meta!Blast computer game: a pipeline from science to 3D art to education

    NASA Astrophysics Data System (ADS)

    Schneller, William; Campbell, P. J.; Bassham, Diane; Wurtele, Eve Syrkin

    2012-03-01

    Meta!Blast (http://www.metablast.org) is designed to address the challenges students often encounter in understanding cell and metabolic biology. Developed by faculty and students in biology, biochemistry, computer science, game design, pedagogy, art and story, Meta!Blast is being created using Maya (http://usa.autodesk.com/maya/) and the Unity 3D (http://unity3d.com/) game engine, for Macs and PCs in classrooms; it has also been exhibited in an immersive environment. Here, we describe the pipeline from protein structural data and holographic information to art to the three-dimensional (3D) environment to the game engine, by which we provide a publicly available interactive 3D cellular world that mimics a photosynthetic plant cell.

  3. Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Noyes, Matthew A.

    2013-01-01

    This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.

  4. Encoding, learning, and spatial updating of multiple object locations specified by 3-D sound, spatial language, and vision.

    PubMed

    Klatzky, Roberta L; Lippa, Yvonne; Loomis, Jack M; Golledge, Reginald G

    2003-03-01

    Participants standing at an origin learned the distance and azimuth of target objects that were specified by 3-D sound, spatial language, or vision. We tested whether the ensuing target representations functioned equivalently across modalities for purposes of spatial updating. In experiment 1, participants localized targets by pointing to each and verbalizing its distance, both directly from the origin and at an indirect waypoint. In experiment 2, participants localized targets by walking to each directly from the origin and via an indirect waypoint. Spatial updating bias was estimated by the spatial-coordinate difference between indirect and direct localization; noise from updating was estimated by the difference in variability of localization. Learning rate and noise favored vision over the two auditory modalities. For all modalities, bias during updating tended to move targets forward, comparably so for three and five targets and for forward and rightward indirect-walking directions. Spatial language produced additional updating bias and noise from updating. Although spatial representations formed from language afford updating, they do not function entirely equivalently to those from intrinsically spatial modalities.

  5. JPL Robotics Laboratory computer vision software library

    NASA Technical Reports Server (NTRS)

    Cunningham, R.

    1984-01-01

    The past ten years of research on computer vision have matured into a powerful real-time system composed of standardized commercial hardware, computers, and pipeline-processing laboratory prototypes, supported by an extensive set of image processing algorithms. The software system was constructed to be transportable through the choice of a popular high-level language (PASCAL) and a widely used computer (VAX-11/750). It comprises a whole range of low-level and high-level processing software that has proven versatile for applications ranging from factory automation to space satellite tracking and grappling.

  6. Computational methods for constructing protein structure models from 3D electron microscopy maps.

    PubMed

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-10-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided.

  7. 3D computation of non-linear eddy currents: Variational method and superconducting cubic bulk

    NASA Astrophysics Data System (ADS)

    Pardo, Enric; Kapolka, Milan

    2017-09-01

    Computing the electric eddy currents in non-linear materials, such as superconductors, is not straightforward. The design of superconducting magnets and power applications needs electromagnetic computer modeling, which is in many cases a three-dimensional (3D) problem. Since 3D problems require high computing times, novel time-efficient modeling tools are highly desirable. This article presents a novel computing modeling method based on a variational principle. The self-programmed implementation uses an original minimization method, which divides the sample into sectors. This speeds up the computations with no loss of accuracy, while enabling efficient parallelization. This method could also be applied to model transients in linear materials or networks of non-linear electrical elements. As an example, we analyze the magnetization currents of a cubic superconductor. This 3D situation has remained unsolved, in spite of the fact that it is often met in material characterization and bulk applications. We found that below the penetration field and in part of the sample, current flux lines are not rectangular and significantly bend in the direction parallel to the applied field. In conclusion, the presented numerical method is able to time-efficiently solve fully 3D situations without loss of accuracy.

  8. Efficient curve-skeleton computation for the analysis of biomedical 3d images - biomed 2010.

    PubMed

    Brun, Francesco; Dreossi, Diego

    2010-01-01

    Advances in three-dimensional (3D) biomedical imaging techniques, such as magnetic resonance (MR) and computed tomography (CT), make it easy to reconstruct high-quality 3D models of portions of the human body and other biological specimens. A major challenge lies in the quantitative analysis of the resulting models, which allows a more comprehensive characterization of the object under investigation. An interesting approach is based on curve-skeleton (or medial axis) extraction, which gives basic information concerning the topology and the geometry. Curve-skeletons have been applied in the analysis of vascular networks and the diagnosis of tracheal stenoses, as well as for 3D flight paths in virtual endoscopy. However, curve-skeleton computation is a crucial task. An effective skeletonization algorithm was introduced by N. Cornea in [1], but it lacks computational efficiency. As the resolution of 3D images keeps increasing with advances in imaging techniques, efficient algorithms are needed in order to analyze significant Volumes of Interest (VOIs). In the present paper an improved skeletonization algorithm based on the idea proposed in [1] is presented. A computational comparison between the original and the proposed method is also reported. The results show that the proposed method yields a significant computational improvement, making the skeleton representation more appealing for biomedical image analysis applications.

  9. Introduction of the ASP3D Computer Program for Unsteady Aerodynamic and Aeroelastic Analyses

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    2005-01-01

    A new computer program has been developed called ASP3D (Advanced Small Perturbation 3D), which solves the small perturbation potential flow equation in an advanced form including mass-consistent surface and trailing wake boundary conditions, and entropy, vorticity, and viscous effects. The purpose of the program is for unsteady aerodynamic and aeroelastic analyses, especially in the nonlinear transonic flight regime. The program exploits the simplicity of stationary Cartesian meshes with the movement or deformation of the configuration under consideration incorporated into the solution algorithm through a planar surface boundary condition. The new ASP3D code is the result of a decade of developmental work on improvements to the small perturbation formulation, performed while the author was employed as a Senior Research Scientist in the Configuration Aerodynamics Branch at the NASA Langley Research Center. The ASP3D code is a significant improvement to the state-of-the-art for transonic aeroelastic analyses over the CAP-TSD code (Computational Aeroelasticity Program Transonic Small Disturbance), which was developed principally by the author in the mid-1980s. The author is in a unique position as the developer of both computer programs to compare, contrast, and ultimately make conclusions regarding the underlying formulations and utility of each code. The paper describes the salient features of the ASP3D code including the rationale for improvements in comparison with CAP-TSD. Numerous results are presented to demonstrate the ASP3D capability. The general conclusion is that the new ASP3D capability is superior to the older CAP-TSD code because of the myriad improvements developed and incorporated.

  10. Computer-Assisted Hepatocellular Carcinoma Ablation Planning Based on 3-D Ultrasound Imaging.

    PubMed

    Li, Kai; Su, Zhongzhen; Xu, Erjiao; Guan, Peishan; Li, Liu-Jun; Zheng, Rongqin

    2016-08-01

    To evaluate computer-assisted hepatocellular carcinoma (HCC) ablation planning based on 3-D ultrasound, 3-D ultrasound images of 60 HCC lesions from 58 patients were obtained and transferred to a research toolkit. Compared with virtual manual ablation planning (MAP), virtual computer-assisted ablation planning (CAP) required less time and fewer needle insertions and exhibited a higher rate of complete tumor coverage and a lower rate of critical structure injury. In MAP, junior operators used less time, but caused more critical structure injury than senior operators. For large lesions, CAP performed better than MAP. For lesions near critical structures, CAP resulted in better outcomes than MAP. Compared with MAP, CAP based on 3-D ultrasound imaging was more effective and achieved a higher rate of complete tumor coverage and a lower rate of critical structure injury; it is especially useful for junior operators, for large lesions and for lesions near critical structures.

  11. Computer-assisted 3D planned corrective osteotomies in eight malunited radius fractures.

    PubMed

    Walenkamp, M M J; de Muinck Keizer, R J O; Dobbe, J G G; Streekstra, G J; Goslings, J C; Kloen, P; Strackee, S D; Schep, N W L

    2015-08-01

    In corrective osteotomy of the radius, detailed preoperative planning is essential to optimising functional outcome. However, complex malunions are not completely addressed with conventional preoperative planning. Computer-assisted preoperative planning may optimise the results of corrective osteotomy of the radius. We analysed the pre- and postoperative radiological results of computer-assisted 3D planned corrective osteotomy in a series of patients with a malunited radius and assessed postoperative function. We included eight patients aged 13-64 who underwent a computer-assisted 3D planned corrective osteotomy of the radius for the treatment of a symptomatic radius malunion. We evaluated pre- and postoperative residual malpositioning on 3D reconstructions, expressed in six positioning parameters (three displacements along and three rotations about the axes of a 3D anatomical coordinate system), and assessed postoperative wrist range of motion. In this small case series, dorsopalmar tilt was significantly improved (p = 0.05). Ulnoradial shift, however, was increased by the corrective osteotomy (6 of 8 cases, 75 %). Postoperative 3D evaluation revealed improved positioning parameters for patients in axial rotational alignment (62.5 %), radial inclination (75 %), proximodistal shift (83 %) and volodorsal shift (88 %), although the cohort was not large enough to confirm this by statistical significance. All but one patient experienced improved range of motion (88 %). Computer-assisted 3D planning improves alignment of radial malunions and functional results in patients with a symptomatic malunion of the radius. Further development is required to improve transfer of the planned position to the bone intra-operatively. Level of evidence IV.
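
    The six positioning parameters can be read off a residual rigid transform between the planned and achieved bone positions, for example as three translations along and three Euler rotations about the anatomical axes. The Python sketch below is a generic decomposition under assumed inputs (a 4 x 4 homogeneous residual transform expressed in the anatomical coordinate system, with translations in mm); it is not the authors' measurement software.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def positioning_parameters(residual):
        """Decompose a 4x4 residual rigid transform into three displacements along
        and three rotations (degrees) about the axes of the anatomical frame."""
        shifts = residual[:3, 3]
        angles = Rotation.from_matrix(residual[:3, :3]).as_euler("xyz", degrees=True)
        return {"shift_x": shifts[0], "shift_y": shifts[1], "shift_z": shifts[2],
                "rot_x": angles[0], "rot_y": angles[1], "rot_z": angles[2]}

    # Example: a 4 mm proximodistal shift combined with a 10 degree rotational malalignment.
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("z", 10, degrees=True).as_matrix()
    T[:3, 3] = [0.0, 0.0, 4.0]
    print(positioning_parameters(T))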

  12. A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics.

    PubMed

    Yoshino, Masanori; Saito, Toki; Kin, Taichi; Nakagawa, Daichi; Nakatomi, Hirofumi; Oyama, Hiroshi; Saito, Nobuhito

    2015-01-01

    Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread because existing commercial operative navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG. This article presents the technical details of our microscopic optically tracking navigation system. Our navigation system consists of three components: the operative microscope, registration, and the image display system. An optical tracker was attached to the microscope to monitor the position and attitude of the microscope in real time; point-pair registration was used to register the operation room coordinate system, and the image coordinate system; and the image display system showed the 3D CG image in the field-of-view of the microscope. Ten neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model. Accuracy of our system was compared with the commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. Target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operation position and the surrounding structures. Systems like this may reduce intraoperative complications.

  13. A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics

    PubMed Central

    YOSHINO, Masanori; SAITO, Toki; KIN, Taichi; NAKAGAWA, Daichi; NAKATOMI, Hirofumi; OYAMA, Hiroshi; SAITO, Nobuhito

    2015-01-01

    Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread because existing commercial operative navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG. This article presents the technical details of our microscopic optically tracking navigation system. Our navigation system consists of three components: the operative microscope, registration, and the image display system. An optical tracker was attached to the microscope to monitor the position and attitude of the microscope in real time; point-pair registration was used to register the operation room coordinate system, and the image coordinate system; and the image display system showed the 3D CG image in the field-of-view of the microscope. Ten neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model. Accuracy of our system was compared with the commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. Target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operation position and the surrounding structures. Systems like this may reduce intraoperative complications. PMID:26226982

  14. Novel fully integrated computer system for custom footwear: from 3D digitization to manufacturing

    NASA Astrophysics Data System (ADS)

    Houle, Pascal-Simon; Beaulieu, Eric; Liu, Zhaoheng

    1998-03-01

    This paper presents a recently developed custom footwear system, which integrates 3D digitization technology, range image fusion techniques, a 3D graphical environment for corrective actions, parametric curved surface representation and computer numerical control (CNC) machining. In this system, a support designed with the help of biomechanics experts can stabilize the foot in a correct and neutral position. The foot surface is then captured by a 3D camera using active ranging techniques. A software package using a library of documented foot pathologies suggests corrective actions on the orthosis. Three kinds of deformation can be applied. The first method uses pad surfaces previously scanned by our 3D scanner, which can easily be mapped onto the foot surface to locally modify the surface shape. The second kind of deformation is the construction of B-spline surfaces by manipulating control points and modifying knot vectors in a 3D graphical environment to build the desired deformation. The last one is a manual electronic 3D pen, which may have different shapes and sizes and has adjustable 'pressure' information. All applied deformations must respect G1 surface continuity, which ensures that the surface can accommodate a foot. Once the surface modification process is completed, the resulting data are sent to manufacturing software for CNC machining.

  15. Computer vision cracks the leaf code.

    PubMed

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas

    2016-03-22

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies.
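
    The learned "codebook of visual elements" is in the spirit of a bag-of-visual-words pipeline: cluster local descriptors into a visual vocabulary, encode each leaf image as a histogram of codeword occurrences, and train a classifier on the histograms. The scikit-learn sketch below uses synthetic descriptors purely to show that structure; it is not the authors' exact feature pipeline.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Stand-ins for local descriptors (e.g. patches around vein junctions);
    # in practice these would be extracted from the cleared-leaf images.
    def fake_descriptors(n_images, n_desc=200, dim=32, shift=0.0):
        return [rng.normal(loc=shift, size=(n_desc, dim)) for _ in range(n_images)]

    family_a = fake_descriptors(30, shift=0.0)
    family_b = fake_descriptors(30, shift=0.4)
    images = family_a + family_b
    labels = np.array([0] * 30 + [1] * 30)

    # 1) Learn the codebook by clustering all local descriptors.
    codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(np.vstack(images))

    # 2) Encode each image as a normalized histogram of codeword assignments.
    def encode(descriptors):
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=64).astype(float)
        return hist / hist.sum()

    X = np.array([encode(d) for d in images])

    # 3) Train a linear classifier on the histograms (family/order labels here).
    clf = LinearSVC(C=1.0).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```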

  16. Computer vision cracks the leaf code

    PubMed Central

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas

    2016-01-01

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664

  17. The Effects of 3D Computer Simulation on Biology Students' Achievement and Memory Retention

    ERIC Educational Resources Information Center

    Elangovan, Tavasuria; Ismail, Zurida

    2014-01-01

    A quasi experimental study was conducted for six weeks to determine the effectiveness of two different 3D computer simulation based teaching methods, that is, realistic simulation and non-realistic simulation on Form Four Biology students' achievement and memory retention in Perak, Malaysia. A sample of 136 Form Four Biology students in Perak,…

  18. Using 3D Computer Graphics Multimedia to Motivate Preservice Teachers' Learning of Geometry and Pedagogy

    ERIC Educational Resources Information Center

    Goodson-Espy, Tracy; Lynch-Davis, Kathleen; Schram, Pamela; Quickenton, Art

    2010-01-01

    This paper describes the genesis and purpose of our geometry methods course, focusing on a geometry-teaching technology we created using the NVIDIA® Chameleon demonstration. This article presents examples from a sequence of lessons centered on a 3D computer graphics demonstration of the chameleon and its geometry. In addition, we present data…

  19. Analysis of thoracic aorta hemodynamics using 3D particle tracking velocimetry and computational fluid dynamics.

    PubMed

    Gallo, Diego; Gülan, Utku; Di Stefano, Antonietta; Ponzini, Raffaele; Lüthi, Beat; Holzner, Markus; Morbiducci, Umberto

    2014-09-22

    Parallel to the massive use of image-based computational hemodynamics to study the complex flow establishing in the human aorta, the need for suitable experimental techniques and ad hoc cases for the validation and benchmarking of numerical codes has grown more and more. Here we present a study where the 3D pulsatile flow in an anatomically realistic phantom of human ascending aorta is investigated both experimentally and computationally. The experimental study uses 3D particle tracking velocimetry (PTV) to characterize the flow field in vitro, while finite volume method is applied to numerically solve the governing equations of motion in the same domain, under the same conditions. Our findings show that there is an excellent agreement between computational and measured flow fields during the forward flow phase, while the agreement is poorer during the reverse flow phase. In conclusion, here we demonstrate that 3D PTV is very suitable for a detailed study of complex unsteady flows as in aorta and for validating computational models of aortic hemodynamics. In a future step, it will be possible to take advantage from the ability of 3D PTV to evaluate velocity fluctuations and, for this reason, to gain further knowledge on the process of transition to turbulence occurring in the thoracic aorta.

  20. Adaptive 3D single-block grids for the computation of viscous flows around wings

    SciTech Connect

    Hagmeijer, R.; Kok, J.C.

    1996-12-31

    A robust algorithm for the adaption of a 3D single-block structured grid suitable for the computation of viscous flows around a wing is presented and demonstrated by application to the ONERA M6 wing. The effects of grid adaption on the flow solution and on accuracy improvements are analyzed. Reynolds number variations are studied.

  1. The Effects of 3D Computer Simulation on Biology Students' Achievement and Memory Retention

    ERIC Educational Resources Information Center

    Elangovan, Tavasuria; Ismail, Zurida

    2014-01-01

    A quasi experimental study was conducted for six weeks to determine the effectiveness of two different 3D computer simulation based teaching methods, that is, realistic simulation and non-realistic simulation on Form Four Biology students' achievement and memory retention in Perak, Malaysia. A sample of 136 Form Four Biology students in Perak,…

  2. Analyzing 3D xylem networks in Vitis vinifera using High Resolution Computed Tomography (HRCT)

    USDA-ARS?s Scientific Manuscript database

    Recent developments in High Resolution Computed Tomography (HRCT) have made it possible to visualize three dimensional (3D) xylem networks without time consuming, labor intensive physical sectioning. Here we describe a new method to visualize complex vessel networks in plants and produce a quantitat...

  3. Enhancement of temporal bone anatomy learning with computer 3D rendered imaging software.

    PubMed

    Venail, Frederic; Deveze, Arnaud; Lallemant, Benjamin; Guevara, Nicolas; Mondain, Michel

    2010-01-01

    To determine whether the use of 3D anatomical models is helpful to students and enhances their anatomical knowledge. First year undergraduate students on the speech therapy or hearing aid practitioner courses attended either a lecture alone or a lecture followed by a 3D anatomy based tutorial, the latter of which was also attended by ENT residents. Participants who received the tutorial were free to use the 3D model on the university computers or on their home computer and were then asked to answer a satisfaction questionnaire. At the end of the first year examinations, the grades of the undergraduate students were compared between the lecture alone group and the lecture plus tutorial group. Generally, all participants found this new tool interesting and user-friendly for the learning of temporal bone anatomy. However, most also considered the help of a teacher indispensable to guide them through the virtual dissection. First year undergraduate students who received the 3D anatomy tutorial performed significantly better during their end of year examination compared to those receiving a lecture alone, particularly concerning the more difficult questions. The 3D anatomical software, used in parallel with traditional teaching methods, such as lectures and cadaver dissection, appears to be a promising tool to improve student learning of temporal bone anatomy.

  4. 3D Computational Modeling of Proteins Using Sparse Paramagnetic NMR Data.

    PubMed

    Pilla, Kala Bharath; Otting, Gottfried; Huber, Thomas

    2017-01-01

    Computational modeling of proteins using evolutionary or de novo approaches offers rapid structural characterization, but often suffers from low success rates in generating high quality models comparable to the accuracy of structures observed in X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. A computational/experimental hybrid approach incorporating sparse experimental restraints in computational modeling algorithms drastically improves reliability and accuracy of 3D models. This chapter discusses the use of structural information obtained from various paramagnetic NMR measurements and demonstrates computational algorithms implementing pseudocontact shifts as restraints to determine the structure of proteins at atomic resolution.
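
    Pseudocontact shifts (PCS) used as restraints above follow a standard closed form: for a nucleus at distance r from the paramagnetic centre, with polar angles (theta, phi) in the susceptibility-tensor frame, the shift is (1/12 pi r^3)[dchi_ax (3 cos^2 theta - 1) + (3/2) dchi_rh sin^2 theta cos 2 phi]. A small helper implementing that textbook equation is sketched below with illustrative numbers; it is not the chapter's modeling software.

    ```python
    import numpy as np

    def pcs_ppm(pos, metal, chi_ax, chi_rh):
        """Pseudocontact shift (ppm) at nuclear position `pos` (metres) for a paramagnetic
        centre at `metal`, given axial/rhombic susceptibility anisotropies (m^3).
        Coordinates are assumed to be expressed in the tensor frame."""
        d = np.asarray(pos, float) - np.asarray(metal, float)
        r = np.linalg.norm(d)
        cos_t = d[2] / r                      # polar angle measured from the tensor z-axis
        sin2_t = 1.0 - cos_t ** 2
        phi = np.arctan2(d[1], d[0])
        geom = chi_ax * (3 * cos_t ** 2 - 1) + 1.5 * chi_rh * sin2_t * np.cos(2 * phi)
        return 1e6 * geom / (12 * np.pi * r ** 3)

    # Illustrative numbers only: a proton about 10 Å from the metal, anisotropies in m^3.
    delta = pcs_ppm(pos=[5e-10, 3e-10, 8e-10], metal=[0, 0, 0],
                    chi_ax=30e-32, chi_rh=5e-32)
    print(f"predicted PCS ≈ {delta:.2f} ppm")
    ```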

  5. 3D dynamic computer model of the head-neck complex.

    PubMed

    Sierra, Daniel A; Enderle, John D

    2006-01-01

    A 3D dynamic computer model for the movement of the head is presented that incorporates anatomically correct information about the diverse elements forming the system. The skeleton is considered as a set of interconnected rigid 3D bodies following the Newton-Euler laws of movement. The muscles are modeled using Enderle's linear model. Finally, the soft tissues, namely the ligaments, intervertebral disks, and zygapophysial joints, are modeled using the finite element approach. The model is intended to study the neural network that controls movement and maintains the balance of the head-neck complex during eye movements.

  6. Fast precalculated triangular mesh algorithm for 3D binary computer-generated holograms.

    PubMed

    Yang, Fan; Kaczorowski, Andrzej; Wilkinson, Tim D

    2014-12-10

    A new method for constructing computer-generated holograms using a precalculated triangular mesh is presented. The speed of calculation can be increased dramatically by exploiting both the precalculated base triangle and GPU parallel computing. Unlike algorithms using point-based sources, this method can reconstruct a more vivid 3D object instead of a "hollow image." In addition, there is no need to do a fast Fourier transform for each 3D element every time. A ferroelectric liquid crystal spatial light modulator is used to display the binary hologram within our experiment and the hologram of a base right triangle is produced by utilizing just a one-step Fourier transform in the 2D case, which can be expanded to the 3D case by multiplying by a suitable Fresnel phase plane. All 3D holograms generated in this paper are based on Fresnel propagation; thus, the Fresnel plane is treated as a vital element in producing the hologram. A GeForce GTX 770 graphics card with 2 GB memory is used to achieve parallel computing.
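
    The "suitable Fresnel phase plane" can be illustrated with transfer-function (Fresnel) propagation: multiply the object field's spectrum by the Fresnel kernel, inverse-transform, and threshold the resulting phase for a binary (ferroelectric) SLM. The numpy sketch below uses an assumed wavelength, pixel pitch, distance and toy object; it is not the paper's precalculated triangle-mesh algorithm.

    ```python
    import numpy as np

    wavelength = 633e-9     # assumed laser wavelength (m)
    pitch = 8e-6            # assumed SLM pixel pitch (m)
    z = 0.30                # assumed propagation distance (m)
    N = 512

    # Toy "object": a bright rectangle in the input plane.
    u0 = np.zeros((N, N), complex)
    u0[200:260, 180:300] = 1.0

    # Fresnel transfer function H(fx, fy) = exp(i k z) * exp(-i pi lambda z (fx^2 + fy^2)).
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))

    uz = np.fft.ifft2(np.fft.fft2(u0) * H)          # field in the hologram plane

    # Binary phase hologram: keep only the sign of the phase (0 or pi), which is
    # what a binary ferroelectric liquid crystal SLM can display.
    binary_hologram = (np.angle(uz) > 0).astype(np.uint8)
    print(binary_hologram.shape, binary_hologram.mean())
    ```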

  7. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262
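
    The "essentially unpredictable answers" criterion can be mimicked in a toy form: restrict a set of annotated training scenes to those consistent with the answers given so far, then ask the question whose conditional probability of "yes" is closest to one half. The sketch below is only an illustrative toy with made-up binary attributes, not the authors' query engine.

    ```python
    import random

    # Toy annotated "scenes": each is a dict of binary attributes.
    random.seed(1)
    scenes = [{"has_person": random.random() < 0.5,
               "person_wears_hat": random.random() < 0.3,
               "has_car": random.random() < 0.6,
               "car_is_red": random.random() < 0.2} for _ in range(2000)]
    questions = list(scenes[0].keys())

    def next_question(history):
        """Pick the unasked question whose answer is most unpredictable (P(yes) ~ 0.5)
        among training scenes consistent with the answers given so far."""
        consistent = [s for s in scenes if all(s[q] == a for q, a in history.items())]
        best, best_gap = None, 1.0
        for q in questions:
            if q in history or not consistent:
                continue
            p_yes = sum(s[q] for s in consistent) / len(consistent)
            if abs(p_yes - 0.5) < best_gap:
                best, best_gap = q, abs(p_yes - 0.5)
        return best

    history = {}
    for _ in range(3):
        q = next_question(history)
        history[q] = True                 # pretend the operator answered "yes"
        print("ask:", q)
    ```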

  8. Beam damage detection using computer vision technology

    NASA Astrophysics Data System (ADS)

    Shi, Jing; Xu, Xiangjun; Wang, Jialai; Li, Gong

    2010-09-01

    In this paper, a new approach for efficient damage detection in engineering structures is introduced. The key concept is to use mature computer vision technology to capture the static deformation profile of a structure, and then employ profile analysis methods to detect the locations of damage. By combining it with wireless communication techniques, the proposed approach can provide an effective and economical solution for remote monitoring of structural health. Moreover, a preliminary experiment is conducted to verify the proposed concept. A commercial computer vision camera is used to capture the static deformation profiles of cracked cantilever beams under loading. The profiles are then processed to reveal the existence and location of the irregularities on the deformation profiles by applying fractal dimension, wavelet transform and roughness methods, respectively. The proposed concept is validated on both one-crack and two-crack cantilever beam-type specimens. It is also shown that all three methods can produce satisfactory results based on the profiles provided by the vision camera. In addition, the profile quality is the determining factor for the noise level in the resultant detection signal.
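
    As a greatly simplified stand-in for the fractal-dimension, wavelet and roughness analyses mentioned above, the sketch below flags a crack as a local spike in the second difference (a curvature-change proxy) of a vision-derived deflection profile. The profile shape, crack location and noise level are simulated values.

    ```python
    import numpy as np

    # Simulated static deflection profile of a tip-loaded cantilever, sampled along
    # the beam by the vision system; a small slope discontinuity imitates a crack.
    n = 400
    x = np.linspace(0.0, 1.0, n)
    profile = x ** 2 * (3 - x) / 2.0                 # smooth normalized deflection shape
    crack_at = 150
    profile[crack_at:] += 0.05 * (x[crack_at:] - x[crack_at])      # extra rotation past the crack
    profile += np.random.default_rng(0).normal(scale=5e-6, size=n)  # measurement noise

    # Irregularity measure: magnitude of the discrete second difference.
    second_diff = np.abs(np.diff(profile, n=2))
    detected = np.argmax(second_diff) + 1            # +1 recentres the double difference
    print(f"crack introduced at sample {crack_at}, detected near sample {detected}")
    ```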

  9. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation

    SciTech Connect

    Turner, L.R.; Levine, D.; Huang, M.; Papka, M.; Kettunen, L.

    1995-08-01

    One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed.

  10. Synesthetic art through 3-D projection: The requirements of a computer-based supermedium

    NASA Technical Reports Server (NTRS)

    Mallary, Robert

    1989-01-01

    A computer-based form of multimedia art is proposed that uses the computer to fuse aspects of painting, sculpture, dance, music, film, and other media into a one-to-one synesthesia of image and sound for spatially synchronous 3-D projection. Called synesthetic art, this conversion of many varied media into an aesthetically unitary experience determines the character and requirements of the system and its software. During the start-up phase, computer stereographic systems are unsuitable for software development. Eventually, a new type of illusory-projective supermedium will be required to achieve the needed combination of large-format projection and convincing real-life presence, and to handle the vast amount of 3-D visual and acoustic information required. The influence of the concept on the author's research and creative work is illustrated through two examples.

  11. Computational study of 3-D hot-spot initiation in shocked insensitive high-explosive

    NASA Astrophysics Data System (ADS)

    Najjar, F. M.; Howard, W. M.; Fried, L. E.; Manaa, M. R.; Nichols, A., III; Levesque, G.

    2012-03-01

    High-explosive (HE) material consists of large-sized grains with micron-sized embedded impurities and pores. Under various mechanical/thermal insults, these pores collapse, generating high-temperature regions that lead to ignition. A hydrodynamic study has been performed to investigate the mechanisms of pore collapse and hot spot initiation in TATB crystals, employing a multiphysics code, ALE3D, coupled to the chemistry module, Cheetah. This computational study includes reactive dynamics. Two-dimensional, high-resolution, large-scale meso-scale simulations have been performed. The parameter space is systematically studied by considering various shock strengths, pore diameters and multiple pore configurations. Preliminary 3-D simulations are undertaken to quantify the 3-D dynamics.

  12. Application of the ASP3D Computer Program to Unsteady Aerodynamic and Aeroelastic Analyses

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    2006-01-01

    A new computer program has been developed called ASP3D (Advanced Small Perturbation - 3D), which solves the small perturbation potential flow equation in an advanced form including mass-consistent surface and trailing wake boundary conditions, and entropy, vorticity, and viscous effects. The purpose of the program is for unsteady aerodynamic and aeroelastic analyses, especially in the nonlinear transonic flight regime. The program exploits the simplicity of stationary Cartesian meshes with the movement or deformation of the configuration under consideration incorporated into the solution algorithm through a planar surface boundary condition. The paper presents unsteady aerodynamic and aeroelastic applications of ASP3D to assess the time dependent capability and demonstrate various features of the code.

  13. Organ printing: computer-aided jet-based 3D tissue engineering.

    PubMed

    Mironov, Vladimir; Boland, Thomas; Trusk, Thomas; Forgacs, Gabor; Markwald, Roger R

    2003-04-01

    Tissue engineering technology promises to solve the organ transplantation crisis. However, assembly of vascularized 3D soft organs remains a big challenge. Organ printing, which we define as computer-aided, jet-based 3D tissue-engineering of living human organs, offers a possible solution. Organ printing involves three sequential steps: pre-processing or development of "blueprints" for organs; processing or actual organ printing; and postprocessing or organ conditioning and accelerated organ maturation. A cell printer that can print gels, single cells and cell aggregates has been developed. Layer-by-layer sequentially placed and solidified thin layers of a thermo-reversible gel could serve as "printing paper". Combination of an engineering approach with the developmental biology concept of embryonic tissue fluidity enables the creation of a new rapid prototyping 3D organ printing technology, which will dramatically accelerate and optimize tissue and organ assembly.

  14. Computer-Vision-Assisted Palm Rehabilitation With Supervised Learning.

    PubMed

    Vamsikrishna, K M; Dogra, Debi Prosad; Desarkar, Maunendra Sankar

    2016-05-01

    Physical rehabilitation supported by the computer-assisted-interface is gaining popularity among health-care fraternity. In this paper, we have proposed a computer-vision-assisted contactless methodology to facilitate palm and finger rehabilitation. Leap motion controller has been interfaced with a computing device to record parameters describing 3-D movements of the palm of a user undergoing rehabilitation. We have proposed an interface using Unity3D development platform. Our interface is capable of analyzing intermediate steps of rehabilitation without the help of an expert, and it can provide online feedback to the user. Isolated gestures are classified using linear discriminant analysis (DA) and support vector machines (SVM). Finally, a set of discrete hidden Markov models (HMM) have been used to classify gesture sequence performed during rehabilitation. Experimental validation using a large number of samples collected from healthy volunteers reveals that DA and SVM perform similarly while applied on isolated gesture recognition. We have compared the results of HMM-based sequence classification with CRF-based techniques. Our results confirm that both HMM and CRF perform quite similarly when tested on gesture sequences. The proposed system can be used for home-based palm or finger rehabilitation in the absence of experts.
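
    The HMM-based sequence classification step can be illustrated with the hmmlearn package: fit one Gaussian HMM per gesture class on that class's training sequences, then assign a new sequence to the class whose model gives the highest log-likelihood. The feature streams below are synthetic stand-ins for Leap Motion palm trajectories, and hmmlearn is an assumed dependency rather than the authors' toolchain.

    ```python
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)

    def make_sequences(n_seq, means):
        """Synthetic 3-D feature sequences that drift through a list of state means."""
        seqs = []
        for _ in range(n_seq):
            parts = [rng.normal(m, 0.3, size=(rng.integers(8, 15), 3)) for m in means]
            seqs.append(np.vstack(parts))
        return seqs

    train = {
        "open_palm": make_sequences(20, means=[[0, 0, 0], [1, 0, 0], [2, 0, 0]]),
        "pinch":     make_sequences(20, means=[[0, 0, 0], [0, 1, 1], [0, 2, 2]]),
    }

    models = {}
    for name, seqs in train.items():
        X, lengths = np.vstack(seqs), [len(s) for s in seqs]
        model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
        model.fit(X, lengths)
        models[name] = model

    # Classify a held-out sequence by maximum log-likelihood over the class models.
    test_seq = make_sequences(1, means=[[0, 0, 0], [0, 1, 1], [0, 2, 2]])[0]
    scores = {name: m.score(test_seq) for name, m in models.items()}
    print("predicted gesture:", max(scores, key=scores.get))
    ```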

  15. An investigation of low-dose 3D scout scans for computed tomography

    NASA Astrophysics Data System (ADS)

    Gomes, Juliana; Gang, Grace J.; Mathews, Aswin; Stayman, J. Webster

    2017-03-01

    Purpose: Commonly 2D scouts or topograms are used prior to CT scan acquisition. However, low-dose 3D scouts could potentially provide additional information for more effective patient positioning and selection of acquisition protocols. We propose using model-based iterative reconstruction to reconstruct low exposure tomographic data to maintain image quality in both low-dose 3D scouts and reprojected topograms based on those 3D scouts. Methods: We performed tomographic acquisitions on a CBCT test-bench using a range of exposure settings from 16.6 to 231.9 total mAs. Both an anthropomorphic phantom and a 32 cm CTDI phantom were scanned. The penalized-likelihood reconstructions were made using Matlab and CUDA libraries and reconstruction parameters were tuned to determine the best regularization strength and delta parameter. RMS error between reconstructions and the highest exposure reconstruction were computed, and CTDIW values were reported for each exposure setting. RMS error for reprojected topograms were also computed. Results: We find that we are able to produce low-dose (0.417 mGy) 3D scouts that show high-contrast and large anatomical features while maintaining the ability to produce traditional topograms. Conclusions: We demonstrated that iterative reconstruction can mitigate noise in very low exposure CT acquisitions to enable 3D CT scout. Such additional 3D information may lead to improved protocols for patient positioning and acquisition refinements as well as a number of advanced dose reduction strategies that require localization of anatomical features and quantities that are not provided by simple 2D topograms.
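
    Model-based iterative reconstruction of the kind referenced above can be caricatured with a tiny regularized least-squares problem: a linear forward model A, noisy low-exposure data y, and gradient descent on ||Ax - y||^2 plus a quadratic roughness penalty. Everything below (system matrix, penalty weight, object) is made up; it is a generic sketch, not the authors' penalized-likelihood algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D "object" and a made-up blurring/projection operator A.
    n = 64
    x_true = np.zeros(n)
    x_true[20:30] = 1.0
    x_true[40:45] = 0.5
    A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)] for i in range(n)])
    y = A @ x_true + rng.normal(scale=0.05, size=n)      # low-exposure (noisy) measurements

    # Finite-difference roughness operator for the quadratic penalty.
    R = np.eye(n) - np.eye(n, k=1)
    beta = 0.5                                           # regularization strength

    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A.T @ A + beta * R.T @ R, 2)   # safe gradient step size
    for _ in range(500):
        grad = A.T @ (A @ x - y) + beta * R.T @ R @ x
        x -= step * grad

    print("RMS error vs truth:", np.sqrt(np.mean((x - x_true) ** 2)))
    ```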

  16. Computer Vision Techniques for Transcatheter Intervention

    PubMed Central

    Zhao, Feng; Roach, Matthew

    2015-01-01

    Minimally invasive transcatheter technologies have demonstrated substantial promise for the diagnosis and the treatment of cardiovascular diseases. For example, transcatheter aortic valve implantation is an alternative to aortic valve replacement for the treatment of severe aortic stenosis, and transcatheter atrial fibrillation ablation is widely used for the treatment and the cure of atrial fibrillation. In addition, catheter-based intravascular ultrasound and optical coherence tomography imaging of coronary arteries provides important information about the coronary lumen, wall, and plaque characteristics. Qualitative and quantitative analysis of these cross-sectional image data will be beneficial to the evaluation and the treatment of coronary artery diseases such as atherosclerosis. In all the phases (preoperative, intraoperative, and postoperative) during the transcatheter intervention procedure, computer vision techniques (e.g., image segmentation and motion tracking) have been largely applied in the field to accomplish tasks like annulus measurement, valve selection, catheter placement control, and vessel centerline extraction. This provides beneficial guidance for the clinicians in surgical planning, disease diagnosis, and treatment assessment. In this paper, we present a systematical review on these state-of-the-art methods. We aim to give a comprehensive overview for researchers in the area of computer vision on the subject of transcatheter intervention. Research in medical computing is multi-disciplinary due to its nature, and hence, it is important to understand the application domain, clinical background, and imaging modality, so that methods and quantitative measurements derived from analyzing the imaging data are appropriate and meaningful. We thus provide an overview on the background information of the transcatheter intervention procedures, as well as a review of the computer vision techniques and methodologies applied in this area. PMID:27170893

  17. 3D-Printed Tissue-Mimicking Phantoms for Medical Imaging and Computational Validation Applications

    PubMed Central

    Shahmirzadi, Danial; Li, Ronny X.; Doyle, Barry J.; Konofagou, Elisa E.; McGloughlin, Tim M.

    2014-01-01

    Abdominal aortic aneurysm (AAA) is a permanent, irreversible dilation of the distal region of the aorta. Recent efforts have focused on improved AAA screening and biomechanics-based failure prediction. Idealized and patient-specific AAA phantoms are often employed to validate numerical models and imaging modalities. To produce such phantoms, the investment casting process is frequently used, reconstructing the 3D vessel geometry from computed tomography patient scans. In this study the alternative use of 3D printing to produce phantoms is investigated. The mechanical properties of flexible 3D-printed materials are benchmarked against proven elastomers. We demonstrate the utility of this process with particular application to the emerging imaging modality of ultrasound-based pulse wave imaging, a noninvasive diagnostic methodology being developed to obtain regional vascular wall stiffness properties, differentiating normal and pathologic tissue in vivo. Phantom wall displacements under pulsatile loading conditions were observed, showing good correlation to fluid–structure interaction simulations and regions of peak wall stress predicted by finite element analysis. 3D-printed phantoms show a strong potential to improve medical imaging and computational analysis, potentially helping bridge the gap between experimental and clinical diagnostic tools. PMID:28804733

  18. Effectiveness of Generalized Aurora Computed Tomography for the EISCAT_3D project

    NASA Astrophysics Data System (ADS)

    Tanaka, Y.; Ogawa, Y.; Kadokura, A.; Aso, T.; Ueno, G.; Saita, S.; Gustavsson, B.; Brandstrom, U.

    2013-12-01

    Aurora Computed Tomography (ACT) is a technique to reconstruct three-dimensional (3-D) distribution of auroral luminosity from a number of monochromatic images taken simultaneously by multi observation points. We have developed a more generalized ACT (hereinafter referred to as G-ACT), which is capable of retrieving energy and spatial distributions of auroral precipitating electrons from multi-instrument data, such as ionospheric electron density from the EISCAT radar, cosmic noise absorption (CNA) from imaging riometer, as well as the auroral images. On the other hand, next-generation incoherent scatter radar, EISCAT_3D, which will be a new multiple site phased-array radar, is planned to replace the existing EISCAT radars in the near future. The EISCAT_3D radar will be able to measure the 3-D ionospheric plasma parameters such as electron density and vector ion drift velocity at ten-times higher temporal and spatial resolution than the present radars and thus is expected to provide new insights into auroral physics. Detailed information of the EISCAT_3D project is described in the web page http://www.eiscat3d.se. The 3-D data measured with the EISCAT_3D radar will be a most interesting target for the application of the G-ACT method. In order to examine how effective G-ACT will be for the EISCAT_3D project, we have conducted numerical simulations. It was assumed for this simulation that (1) monochromatic imagers at ALIS (Aurora Large Imaging System) stations were directed to the ionospheric region over Skibotn (69.35N, 20.37E), Norway, (2) the EISCAT_3D radar was installed at Skibotn and observed the volume from 68.6 to 69.4N latitude and 18.8 to 21.8E longitude with multiple beams, and (3) two neighboring discrete arcs appeared over Skibotn. We first obtained data observed with the ALIS imagers and the EISCAT_3D radar by solving the forward problem and then applied the G-ACT method to these data. It was demonstrated that even if the spatial distribution of the

  19. Interactive virtual simulation using a 3D computer graphics model for microvascular decompression surgery.

    PubMed

    Oishi, Makoto; Fukuda, Masafumi; Hiraishi, Tetsuya; Yajima, Naoki; Sato, Yosuke; Fujii, Yukihiko

    2012-09-01

    The purpose of this paper is to report on the authors' advanced presurgical interactive virtual simulation technique using a 3D computer graphics model for microvascular decompression (MVD) surgery. The authors performed interactive virtual simulation prior to surgery in 26 patients with trigeminal neuralgia or hemifacial spasm. The 3D computer graphics models for interactive virtual simulation were composed of the brainstem, cerebellum, cranial nerves, vessels, and skull individually created by the image analysis, including segmentation, surface rendering, and data fusion for data collected by 3-T MRI and 64-row multidetector CT systems. Interactive virtual simulation was performed by employing novel computer-aided design software with manipulation of a haptic device to imitate the surgical procedures of bone drilling and retraction of the cerebellum. The findings were compared with intraoperative findings. In all patients, interactive virtual simulation provided detailed and realistic surgical perspectives, of sufficient quality, representing the lateral suboccipital route. The causes of trigeminal neuralgia or hemifacial spasm determined by observing 3D computer graphics models were concordant with those identified intraoperatively in 25 (96%) of 26 patients, which was a significantly higher rate than the 73% concordance rate (concordance in 19 of 26 patients) obtained by review of 2D images only (p < 0.05). Surgeons evaluated interactive virtual simulation as having "prominent" utility for carrying out the entire surgical procedure in 50% of cases. It was evaluated as moderately useful or "supportive" in the other 50% of cases. There were no cases in which it was evaluated as having no utility. The utilities of interactive virtual simulation were associated with atypical or complex forms of neurovascular compression and structural restrictions in the surgical window. Finally, MVD procedures were performed as simulated in 23 (88%) of the 26 patients . Our

  20. User's guide to the NOZL3D and NOZLIC computer programs

    NASA Technical Reports Server (NTRS)

    Thomas, P. D.

    1980-01-01

    Complete FORTRAN listings and running instructions are given for a set of computer programs that perform an implicit numerical solution to the unsteady Navier-Stokes equations to predict the flow characteristics and performance of nonaxisymmetric nozzles. The set includes the NOZL3D program, which performs the flow computations; the NOZLIC program, which sets up the flow field initial conditions for general nozzle configurations, and also generates the computational grid for simple two dimensional and axisymmetric configurations; and the RGRIDD program, which generates the computational grid for complicated three dimensional configurations. The programs are designed specifically for the NASA-Langley CYBER 175 computer, and employ auxiliary disk files for primary data storage. Input instructions and computed results are given for four test cases that include two dimensional, three dimensional, and axisymmetric configurations.

  1. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    PubMed Central

    2011-01-01

    Background: Precise placement of dental implants is a crucial step to optimize both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool to control the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure lies in the lack of accuracy in transferring CT planning information to the surgical field through custom-made stereo-lithographic surgical guides. Methods: In this work, a novel methodology is proposed for monitoring the loss of accuracy in transferring CT dental information into the periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes. Results: A clinical case involving a fully edentulous patient has been used as a test case to assess the accuracy of the various steps involved in manufacturing surgical guides. In particular, a surgical guide has been designed to place implants in the bone structure of the patient. The analysis of the results has allowed the clinician to monitor all the errors occurring step by step in manufacturing the physical templates. Conclusions: The use of an optical scanner, which has higher resolution and accuracy than CT scanning, has proved to be a valid support for controlling the precision of the various physical models adopted and for pointing out possible error sources. A case study regarding a fully edentulous patient has confirmed the feasibility of the proposed methodology. PMID:21338504

  2. The computer simulation of 3d gas dynamics in a gas centrifuge

    NASA Astrophysics Data System (ADS)

    Borman, V. D.; Bogovalov, S. V.; Borisevich, V. D.; Tronin, I. V.; Tronin, V. N.

    2016-09-01

    We argue, on the basis of the results of 2D analysis of the gas flow in gas centrifuges, that a reliable calculation of the circulation of the gas and the gas content in the gas centrifuge is possible only in the framework of 3D numerical simulation of gas dynamics in the gas centrifuge (hereafter GC). The group from the National Research Nuclear University MEPhI has created a computer code for 3D simulation of the gas flow in a GC. The results of the computer simulations of the gas flows in a GC are presented. A model Iguassu centrifuge is explored for the simulations. A nonaxisymmetric gas flow is produced by the interaction of the hypersonic rotating flow with the scoops for extraction of the product and waste flows from the GC. The scoops produce shock waves that penetrate into the working chamber of the GC and form spiral waves there.

  3. SALE-3D: a simplified ALE computer program for calculating three-dimensional fluid flow

    SciTech Connect

    Amsden, A.A.; Ruppel, H.M.

    1981-11-01

    This report presents a simplified numerical fluid-dynamics computing technique for calculating time-dependent flows in three dimensions. An implicit treatment of the pressure equation permits calculation of flows far subsonic without stringent constraints on the time step. In addition, the grid vertices may be moved with the fluid in Lagrangian fashion or held fixed in an Eulerian manner, or moved in some prescribed manner to give a continuous rezoning capability. This report describes the combination of Implicit Continuous-fluid Eulerian (ICE) and Arbitrary Lagrangian-Eulerian (ALE) to form the ICEd-ALE technique in the framework of the Simplified-ALE (SALE-3D) computer program, for which a general flow diagram and complete FORTRAN listing are included. Sample problems show how to modify the code for a variety of applications. SALE-3D is patterned as closely as possible on the previously reported two-dimensional SALE program.

  4. Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver

    NASA Technical Reports Server (NTRS)

    Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)

    2002-01-01

    The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.

  5. Colour vision and computer-generated images

    NASA Astrophysics Data System (ADS)

    Ramek, Michael

    2010-06-01

    Colour vision deficiencies affect approximately 8% of the male and approximately 0.4% of the female population. In this work, it is demonstrated that computer generated images oftentimes pose unnecessary problems for colour deficient viewers. Three examples, the visualization of molecular structures, graphs of mathematical functions, and colour coded images from numerical data are used to identify problematic colour combinations: red/black, green/black, red/yellow, yellow/white, fuchsia/white, and aqua/white. Alternatives for these combinations are discussed.
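
    One crude, automatable screen related to the pairs flagged above is plain luminance contrast: red/yellow, yellow/white, fuchsia/white and aqua/white also fail the ordinary WCAG contrast-ratio criterion, while red/black and green/black pass it even though they remain problematic for colour-deficient viewers, which is exactly why deficiency-aware checks are still needed. The sketch below computes that WCAG ratio; it is a proxy only and does not simulate colour-vision deficiency.

    ```python
    def relative_luminance(rgb):
        """WCAG 2.x relative luminance of an sRGB colour given as 0-255 integers."""
        def lin(c):
            c = c / 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (lin(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg):
        l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (l1 + 0.05) / (l2 + 0.05)

    pairs = {
        "red/black":     ((255, 0, 0), (0, 0, 0)),
        "green/black":   ((0, 255, 0), (0, 0, 0)),
        "red/yellow":    ((255, 0, 0), (255, 255, 0)),
        "yellow/white":  ((255, 255, 0), (255, 255, 255)),
        "fuchsia/white": ((255, 0, 255), (255, 255, 255)),
        "aqua/white":    ((0, 255, 255), (255, 255, 255)),
    }
    for name, (fg, bg) in pairs.items():
        print(f"{name:>14}: contrast {contrast_ratio(fg, bg):.2f}")
    ```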

  6. Three computer vision applications in dentistry

    NASA Astrophysics Data System (ADS)

    Dostalova, Tatjana; Hlavac, Vaclav; Pajdla, T.; Sara, Radim; Smutny, Vladimir

    1994-05-01

    This paper summarizes three recent applications of computer vision techniques in dentistry developed at the Czech Technical University. The first one uses a special optical instrument to capture the image of the tooth arc directly in the patient's mouth. The captured images are used for visualization of teeth position changes during treatment. The second application allows the use of images for checking teeth occlusal contacts and their abrasion. The third application uses photometric measurements to study the resistance of the dental material against microbial growth.

  7. Computer vision for microscopy diagnosis of malaria.

    PubMed

    Tek, F Boray; Dempster, Andrew G; Kale, Izzet

    2009-07-13

    This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.
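
    The general framework described above (image acquisition, pre-processing, segmentation, pattern classification) can be sketched as a minimal pipeline: threshold and label candidate cells with scikit-image, extract a few per-object features, and hand them to a classifier. The synthetic image, threshold choice and feature list below are illustrative placeholders, not a validated malaria detector.

    ```python
    import numpy as np
    from skimage import filters, measure, morphology

    def segment_cells(gray_image):
        """Very rough cell segmentation: Otsu threshold plus small-object removal."""
        mask = gray_image < filters.threshold_otsu(gray_image)   # cells darker than background
        mask = morphology.remove_small_objects(mask, min_size=50)
        return measure.label(mask)

    def cell_features(gray_image, labels):
        """A few per-cell features (area, eccentricity, mean intensity)."""
        return np.array([[r.area, r.eccentricity, r.mean_intensity]
                         for r in measure.regionprops(labels, intensity_image=gray_image)])

    # Tiny synthetic "smear": dark blobs on a bright background.
    rng = np.random.default_rng(0)
    img = 0.9 + 0.02 * rng.standard_normal((128, 128))
    yy, xx = np.ogrid[:128, :128]
    for cy, cx in [(30, 40), (80, 90), (100, 30)]:
        img[(yy - cy) ** 2 + (xx - cx) ** 2 < 64] = 0.3

    labels = segment_cells(img)
    print("candidate cells found:", labels.max())
    print("feature matrix shape:", cell_features(img, labels).shape)

    # A classifier (e.g. a random forest or SVM) would then be trained on such
    # feature rows paired with expert infected/uninfected labels.
    ```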

  8. A shape representation for computer vision based on differential topology.

    PubMed

    Blicher, A P

    1995-01-01

    We describe a shape representation for use in computer vision, after a brief review of shape representation and object recognition in general. Our shape representation is based on graph structures derived from level sets whose characteristics are understood from differential topology, particularly singularity theory. This leads to a representation which is both stable and whose changes under deformation are simple. The latter allows smoothing in the representation domain ('symbolic smoothing'), which in turn can be used for coarse-to-fine strategies, or as a discrete analog of scale space. Essentially the same representation applies to an object embedded in 3-dimensional space as to one in the plane, and likewise for a 3D object and its silhouette. We suggest how this can be used for recognition.
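
    A toy version of the level-set construction above: sweep a threshold over a 2-D intensity image and record how many connected components the superlevel set has at each level; levels where the count changes are the topological events (births and merges) that such a graph representation encodes. The scipy sketch below uses a synthetic two-bump image purely as an illustration.

    ```python
    import numpy as np
    from scipy import ndimage

    # Synthetic "shape": two Gaussian bumps of different heights on a grid.
    y, x = np.mgrid[0:128, 0:128]
    img = (np.exp(-((x - 40) ** 2 + (y - 64) ** 2) / 200.0)
           + 0.6 * np.exp(-((x - 90) ** 2 + (y - 64) ** 2) / 200.0))

    levels = np.linspace(0.05, 0.95, 19)
    counts = []
    for level in levels:
        _, n = ndimage.label(img >= level)      # components of the superlevel set
        counts.append(n)

    # Levels at which the component count changes mark critical events
    # (a bump appearing, two regions merging) -- the nodes/edges of the graph.
    for lo, hi, c0, c1 in zip(levels[:-1], levels[1:], counts[:-1], counts[1:]):
        if c0 != c1:
            print(f"topology change between level {lo:.2f} and {hi:.2f}: {c0} -> {c1} components")
    ```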

  9. Computed Tomography and its Application for the 3D Characterization of Coarse Grained Meteorites

    NASA Technical Reports Server (NTRS)

    Gillies, Donald C.; Engel, H. P.; Carpenter, P. K.

    2004-01-01

    With judicious selection of parameters, computed tomography can provide high-precision density data. Such data can lead to a non-destructive determination of the phases and phase distribution within large solid objects. Of particular interest is the structure of the Mundrabilla meteorite, which contains 25 volume percent of a sulfide within a metallic meteorite. 3D digital imaging has enabled a quantitative evaluation of the distribution and contiguity of the phases to be determined.

  11. Validation of computational code UST3D by the example of experimental aerodynamic data

    NASA Astrophysics Data System (ADS)

    Surzhikov, S. T.

    2017-02-01

    Numerical simulation of the aerodynamic characteristics of the hypersonic vehicles X-33 and X-34 as well as spherically blunted cone is performed using the unstructured meshes. It is demonstrated that the numerical predictions obtained with the computational code UST3D are in acceptable agreement with the experimental data for approximate parameters of the geometry of the hypersonic vehicles and in excellent agreement with data for blunted cone.

  12. Effectiveness Evaluation of Force Protection Training Using Computer-Based Instruction and X3d Simulation

    DTIC Science & Technology

    2007-09-01

    ... to growing operational constraints accelerated by the Global War on Terror, the United States Navy is looking for alternative methods of training ... accomplished efficiently and effectively, saving the U.S. Navy time and resources while maintaining a high state of readiness. The goal of this thesis is ... (Thesis by Wilfredo Cruzbaez, Lieutenant, United States Navy; B.A., Norfolk State University, 2001.)

  13. Comparison of traditional methods with 3D computer models in the instruction of hepatobiliary anatomy.

    PubMed

    Keedy, Alexander W; Durack, Jeremy C; Sandhu, Parmbir; Chen, Eric M; O'Sullivan, Patricia S; Breiman, Richard S

    2011-01-01

    This study was designed to determine whether an interactive three-dimensional presentation depicting liver and biliary anatomy is more effective for teaching medical students than a traditional textbook format presentation of the same material. Forty-six medical students volunteered for participation in this study. Baseline demographic information, spatial ability, and knowledge of relevant anatomy were measured. Participants were randomized into two groups and presented with a computer-based interactive learning module comprised of animations and still images to highlight various anatomical structures (3D group), or a computer-based text document containing the same images and text without animation or interactive features (2D group). Following each teaching module, students completed a satisfaction survey and nine-item anatomic knowledge post-test. The 3D group scored higher on the post-test than the 2D group, with a mean score of 74% and 64%, respectively; however, when baseline differences in pretest scores were accounted for, this difference was not statistically significant (P = 0.33). Spatial ability did not statistically significantly correlate with post-test scores for the 3D group or the 2D group. In the post-test satisfaction survey the 3D group expressed a statistically significantly higher overall satisfaction rating compared to students in the 2D control group (4.5 versus 3.7 out of 5, P = 0.02). While the interactive 3D multimedia module received higher satisfaction ratings from students, it neither enhanced nor inhibited learning of complex hepatobiliary anatomy compared to an informationally equivalent traditional textbook style approach. Copyright © 2011 American Association of Anatomists.

  14. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured position of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot consider the manufacturing and assembly tolerance of the joints and the parallel mechanism, so that errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system combining two CCDs serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
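
    Forward kinematics with Denavit-Hartenberg parameters, as used above, chains one homogeneous transform per link. The sketch below shows the generic construction for a made-up three-link serial chain; it is not the kinematics of the pneumatic parallel mechanism itself.

    ```python
    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Standard Denavit-Hartenberg homogeneous transform for one link."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.,       sa,       ca,      d],
            [0.,       0.,       0.,     1.],
        ])

    def forward_kinematics(dh_rows):
        """Chain the per-link transforms; returns the end-effector pose (4x4)."""
        T = np.eye(4)
        for theta, d, a, alpha in dh_rows:
            T = T @ dh_transform(theta, d, a, alpha)
        return T

    # Hypothetical 3-joint chain: one (theta, d, a, alpha) row per link, lengths in metres.
    dh_rows = [
        (np.deg2rad(30),  0.10, 0.25, 0.0),
        (np.deg2rad(-45), 0.00, 0.20, 0.0),
        (0.0,             0.05, 0.10, np.deg2rad(90)),
    ]
    T = forward_kinematics(dh_rows)
    print("end-effector position (x, y, z):", np.round(T[:3, 3], 4))
    ```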

  15. Development of a 3D Parallel Mechanism Robot Arm with Three Vertical-Axial Pneumatic Actuators Combined with a Stereo Vision System

    PubMed Central

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism are designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot’s end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measuring position of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot consider the manufacturing and assembly tolerance of the joints and the parallel mechanism so that errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system combining two CCD serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to

  16. Improving accuracy and computation time of 3D reconstruction through an improved carving procedure

    NASA Astrophysics Data System (ADS)

    Ruiz, Diego; Macq, Benoit

    2005-01-01

    A growing number of mixed reality applications have to build 3D models of arbitrary shapes. However, modeling of an arbitrary shape implies a trade-off between accuracy and computation time. Real-time methods based on the visual hull cannot model the holes of the shape inside the approximated silhouette. Carving methods can, but they are not real time. The aim of this paper is to improve their accuracy and computation time. It presents a novel multiresolution algorithm for 3D reconstruction of arbitrary 3D shapes from range data acquired at fixed viewpoints. The algorithm is split into two parts. The first part labels a voxel using the current viewpoint, without taking previous labels into account. The second part updates the labels and grows the octree representing the voxelized space. It determines the number of calls made to the first part, which is time-consuming. A novel set of labels, the study of the parallelepiped projections and a front-to-back propagation of information allow us to improve accuracy in both parts, to reduce the computation cost of the voxel labeling part and to reduce the number of calls made to it by the multiresolution and voxel updating part.
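
    The basic carving decision can be reduced to a few lines in the classic silhouette-based (visual hull) setting, a simpler cousin of the range-data labeling described above: project each voxel and discard it if it falls outside any view's silhouette. The sketch below uses toy orthographic views of a synthetic sphere purely to show that logic, not the paper's octree, label set or range-data tests.

    ```python
    import numpy as np

    # Toy scene: a sphere of radius 0.3 centred in the unit cube.
    res = 64
    grid = np.stack(np.meshgrid(*[np.linspace(0, 1, res)] * 3, indexing="ij"), axis=-1)
    inside_true = np.linalg.norm(grid - 0.5, axis=-1) < 0.3

    # Three orthographic "cameras" looking down x, y and z: each silhouette is the
    # occupancy of the true shape projected along that axis.
    silhouettes = [inside_true.any(axis=k) for k in range(3)]

    # Carving: a voxel survives only if its projection lies inside every silhouette.
    idx = np.meshgrid(*[np.arange(res)] * 3, indexing="ij")
    occupied = np.ones((res, res, res), dtype=bool)
    for k, sil in enumerate(silhouettes):
        a, b = [axis for axis in range(3) if axis != k]   # the two axes kept by this view
        occupied &= sil[idx[a], idx[b]]

    print("true volume:", inside_true.sum(), "carved (visual hull) volume:", occupied.sum())
    ```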

  17. Improving accuracy and computation time of 3D reconstruction through an improved carving procedure

    NASA Astrophysics Data System (ADS)

    Ruiz, Diego; Macq, Benoît

    2004-12-01

    A growing number of mixed reality applications have to build 3D models of arbitrary shapes. However, modeling of an arbitrary shape implies a trade-off between accuracy and computation time. Real-time methods based on the visual hull cannot model the holes of the shape inside the approximated silhouette. Carving methods can, but they are not real time. The aim of this paper is to improve their accuracy and computation time. It presents a novel multiresolution algorithm for 3D reconstruction of arbitrary 3D shapes from range data acquired at fixed viewpoints. The algorithm is split into two parts. The first part labels a voxel using the current viewpoint, without taking previous labels into account. The second part updates the labels and grows the octree representing the voxelized space. It determines the number of calls made to the first part, which is time-consuming. A novel set of labels, the study of the parallelepiped projections and a front-to-back propagation of information allow us to improve accuracy in both parts, to reduce the computation cost of the voxel labeling part and to reduce the number of calls made to it by the multiresolution and voxel updating part.

  18. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large-scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonic studies in China. Aiming at the large-scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3-D gravity inversion has been greatly improved in the last decade. Many authors added many different kinds of a priori information and constraints to deal with nonuniqueness, using models composed of a large number of contiguous cells of unknown property, and obtained good results. However, due to long computation time, instability and other shortcomings, 3-D physical property inversion has not been widely applied to large-scale data yet. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need to find a fast and efficient inversion method for large-scale gravity data. As an entirely new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding only a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we change the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with real data, and

  19. Computer assisted 3D pre-operative planning tool for femur fracture orthopedic surgery

    NASA Astrophysics Data System (ADS)

    Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang

    2010-02-01

    Femur shaft fractures are caused by high impact injuries and can affect gait functionality if not treated correctly. Until recently, the pre-operative planning for femur fractures has relied on two-dimensional (2D) radiographs, light boxes, tracing paper, and transparent bone templates. The recent availability of digital radiographic equipment has to some extent improved the workflow for preoperative planning. Nevertheless, imaging is still in 2D X-rays and planning/simulation tools to support fragment manipulation and implant selection are still not available. Direct three-dimensional (3D) imaging modalities such as Computed Tomography (CT) are also still restricted to a minority of complex orthopedic procedures. This paper proposes a software tool which allows orthopedic surgeons to visualize, diagnose, plan and simulate femur shaft fracture reduction procedures in 3D. The tool utilizes frontal and lateral 2D radiographs to model the fracture surface, separate a generic bone into the two fractured fragments, identify the pose of each fragment, and automatically customize the shape of the bone. The use of 3D imaging allows full spatial inspection of the fracture providing different views through the manipulation of the interactively reconstructed 3D model, and ultimately better pre-operative planning.

  20. 3D animation of facial plastic surgery based on computer graphics

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Zhao, Yan

    2013-12-01

    More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible, since facial plastic surgery has been practised since the early 20th century and even earlier, when doctors mainly dealt with facial war injuries. However, the outcome of an operation is not always satisfying, since no animation can be shown to the patients beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method of simulating the post-operative appearance is presented, demonstrating the modified face from different viewpoints. The 3D human face data are obtained using 3D fringe-pattern imaging systems and CT imaging systems and then converted into the STL (STereo Lithography) file format. An STL file is made up of small 3D triangular primitives. The triangular mesh can be reconstructed by using a hash function. The front-most triangles in depth are selected from the full set of triangles by a ray-casting technique. During simulation, mesh deformation is based on this front triangular mesh, deforming the region of interest rather than control points. Experiments on a face model show that the proposed 3D animation of facial plastic surgery can effectively demonstrate the simulated post-operative appearance.
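
    An STL file stores each triangle with its own three vertex coordinates, so rebuilding a connected triangular mesh means merging duplicated vertices; hashing rounded coordinates to shared indices is one common way to do this, and presumably what the hash-function step above refers to. The sketch below is illustrative only, not the authors' implementation.

        import numpy as np

        def stl_triangles_to_mesh(triangles, decimals=6):
            """Merge duplicated STL vertices into a shared vertex list and face index array.

            triangles : array of shape (n_tri, 3, 3) -- three xyz vertices per triangle
            returns   : (vertices (n_vert, 3), faces (n_tri, 3) of vertex indices)
            """
            vertex_index = {}          # hash table: rounded coordinate tuple -> vertex index
            vertices, faces = [], []
            for tri in triangles:
                face = []
                for v in tri:
                    key = tuple(np.round(v, decimals))   # hash key tolerant to float noise
                    if key not in vertex_index:
                        vertex_index[key] = len(vertices)
                        vertices.append(np.asarray(v, dtype=float))
                    face.append(vertex_index[key])
                faces.append(face)
            return np.array(vertices), np.array(faces, dtype=int)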

  1. A 3D learning playground for potential attention training in ADHD: A brain computer interface approach.

    PubMed

    Ali, Abdulla; Puthusserypady, Sadasivan

    2015-01-01

    This paper presents a novel brain-computer interface (BCI) system that could potentially be used for enhancing the attention ability of subjects with attention deficit hyperactivity disorder (ADHD). It employs the steady state visual evoked potential (SSVEP) paradigm. The developed system consists of a 3D classroom environment with active 3D distractions and 2D games executed on the blackboard. The system is concealed as a game (with stages of varying difficulty) with an underlying story to motivate the subjects. It was tested on eleven healthy subjects, and the results clearly establish that by moving to a higher stage in the game, where the 2D environment is changed to 3D along with the added 3D distractions, the difficulty of keeping attention on the main task increases for the subjects. Results also show a mean accuracy of 92.26 ± 7.97% and a mean selection time of 3.07 ± 1.09 seconds.

  2. A User-Developed 3-D Hand Gesture Set for Human-Computer Interaction.

    PubMed

    Pereira, Anna; Wachs, Juan P; Park, Kunwoo; Rempel, David

    2015-06-01

    The purpose of this study was to develop a lexicon for 3-D hand gestures for common human-computer interaction (HCI) tasks by considering usability and effort ratings. Recent technologies create an opportunity for developing a free-form 3-D hand gesture lexicon for HCI. Subjects (N = 30) with prior experience using 2-D gestures on touch screens performed 3-D gestures of their choice for 34 common HCI tasks and rated their gestures on preference, match, ease, and effort. Videos of the 1,300 generated gestures were analyzed for gesture popularity, order, and response times. Gesture hand postures were rated by the authors on biomechanical risk and fatigue. A final task gesture set is proposed based primarily on subjective ratings and hand posture risk. The different dimensions used for evaluating task gestures were not highly correlated and, therefore, measured different properties of the task-gesture match. A method is proposed for generating a user-developed 3-D gesture lexicon for common HCIs that involves subjective ratings and a posture risk rating for minimizing arm and hand fatigue. © 2014, Human Factors and Ergonomics Society.

  3. Computer-aided planning and reconstruction of cranial 3D implants.

    PubMed

    Gall, Markus; Xing Li; Xiaojun Chen; Schmalstieg, Dieter; Egger, Jan

    2016-08-01

    In this contribution, a prototype for semi-automatic computer-aided planning and reconstruction of cranial 3D implants is presented. The software prototype guides the user through the workflow, beginning with loading and mirroring the patient's head to obtain an initial curvature for the cranial implant. However, naïve mirroring alone is not sufficient for an implant, because human heads are in general too asymmetric. Thus, the user can perform Laplacian smoothing, followed by Delaunay triangulation, to generate an aesthetically pleasing and well-fitting implant. Finally, our software prototype allows saving the designed 3D model of the implant as an STL file for 3D printing. The 3D-printed implant can be used for further pre-interventional planning or even as the final implant for the patient. In summary, our findings show that a customized MeVisLab prototype can be an alternative to complex commercial planning software, which may not be available in a clinic.
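
    The Laplacian smoothing step mentioned above moves each vertex towards the average of its neighbours, smoothing the mirrored surface before triangulation. A minimal, generic sketch on a vertex/face mesh is given below; the prototype itself is built in MeVisLab, so this is only an illustration.

        import numpy as np

        def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
            """Simple uniform Laplacian smoothing.

            vertices : (n, 3) array of vertex positions
            faces    : (m, 3) integer array of triangle vertex indices
            """
            # build vertex adjacency from the triangle edges
            neighbors = [set() for _ in range(len(vertices))]
            for a, b, c in faces:
                neighbors[a].update((b, c))
                neighbors[b].update((a, c))
                neighbors[c].update((a, b))

            v = vertices.astype(float).copy()
            for _ in range(iterations):
                centroids = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                                      for i, nb in enumerate(neighbors)])
                v += lam * (centroids - v)   # move each vertex towards its neighbourhood mean
            return v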

  4. [3D modeling of the female pelvis by Computer-Assisted Anatomical Dissection: Applications and perspectives].

    PubMed

    Balaya, V; Uhl, J-F; Lanore, A; Salachas, C; Samoyeau, T; Ngo, C; Bensaid, C; Cornou, C; Rossi, L; Douard, R; Bats, A-S; Lecuru, F; Delmas, V

    2016-05-01

    To achieve a 3D vectorial model of a female pelvis by Computer-Assisted Anatomical Dissection and to assess its educational and surgical applications. From the "visible female" database of the Visible Human Project® (VHP) of the National Library of Medicine (NLM, United States), we used 739 transverse anatomical slices of 0.33 mm thickness extending from L4 to the trochanters. Manual segmentation of each anatomical structure was done with the Winsurf® software, version 4.3. Each anatomical element was built as a separate vectorial object. The whole colour-rendered vectorial model with realistic textures was exported in 3D PDF format to allow real-time interactive manipulation with the Acrobat® Pro version 11 software. Each element can be handled separately at any transparency, which allows anatomical learning by system: skeleton, pelvic organs, urogenital system, and arterial and venous vascularization. This 3D anatomical model can be used as a data bank for teaching fundamental anatomy. This realistic and interactive 3D vectorial model constitutes an efficient educational tool for teaching the anatomy of the pelvis. 3D printing of the pelvis is possible with the new printers. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  5. THERM3D -- A boundary element computer program for transient heat conduction problems

    SciTech Connect

    Ingber, M.S.

    1994-02-01

    The computer code THERM3D implements the direct boundary element method (BEM) to solve transient heat conduction problems in arbitrary three-dimensional domains. This particular implementation of the BEM avoids performing time-consuming domain integrations by approximating a "generalized forcing function" in the interior of the domain with the use of radial basis functions. An approximate particular solution is then constructed, and the original problem is transformed into a sequence of Laplace problems. The code is capable of handling a large variety of boundary conditions including isothermal, specified flux, convection, radiation, and combined convection and radiation conditions. The computer code is benchmarked by comparisons with analytic and finite element results.
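
    The dual-reciprocity idea sketched above replaces the domain integral by approximating the forcing term with radial basis functions centred on interior collocation points, after which the expansion coefficients follow from a small linear solve. The sketch below uses the classic 1 + r basis as an assumed choice; it is not taken from the THERM3D source.

        import numpy as np

        def rbf_fit(points, forcing_values):
            """Fit b(x) ~ sum_j alpha_j * (1 + |x - x_j|) at the collocation points."""
            r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            Phi = 1.0 + r                      # classic dual-reciprocity basis f = 1 + r
            return np.linalg.solve(Phi, forcing_values)

        def rbf_eval(points, alpha, x):
            """Evaluate the fitted approximation at arbitrary locations x, shape (k, 3)."""
            r = np.linalg.norm(x[:, None, :] - points[None, :, :], axis=-1)
            return (1.0 + r) @ alpha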

  6. 3-D heat transfer computer calculations of the performance of the IAEA's air-bath calorimeters

    SciTech Connect

    Elias, E.; Kaizermann, S.; Perry, R.B.; Fiarman, S.

    1989-01-01

    A three dimensional (3-D) heat transfer computer code was developed to study and optimize the design parameters and to better understand the performance characteristics of the IAEA's air-bath calorimeters. The computer model accounts for heat conduction and radiation in the complex materials of the calorimeter and for heat convection and radiation at its outer surface. The temperature servo controller is modelled as an integral part of the heat balance equations in the system. The model predictions will be validated against test data using the ANL bulk calorimeter. 11 refs., 6 figs.

  7. Full 3-D OCT-based pseudophakic custom computer eye model

    PubMed Central

    Sun, M.; Pérez-Merino, P.; Martinez-Enriquez, E.; Velasco-Ocana, M.; Marcos, S.

    2016-01-01

    We compared measured wave aberrations in pseudophakic eyes implanted with aspheric intraocular lenses (IOLs) with simulated aberrations from numerical ray tracing on customized computer eye models, built using quantitative 3-D OCT-based patient-specific ocular geometry. Experimental and simulated aberrations show high correlation (R = 0.93; p<0.0001) and similarity (RMS for high order aberrations discrepancies within 23.58%). This study shows that full OCT-based pseudophakic custom computer eye models allow understanding the relative contribution of optical geometrical and surgically-related factors to image quality, and are an excellent tool for characterizing and improving cataract surgery. PMID:27231608

  8. Computation of an Underexpanded 3-D Rectangular Jet by the CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Himansu, Ananda; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2000-01-01

    Recently, an unstructured three-dimensional space-time conservation element and solution element (CE/SE) Euler solver was developed. Now it has also been developed for parallel computation, using METIS for domain decomposition and MPI (message passing interface). The method is employed here to numerically study the near field of a typical 3-D rectangular under-expanded jet. For the computed case, a jet with Mach number Mj = 1.6 on a very modest grid of 1.7 million tetrahedra, the flow features, such as the shock-cell structures and the axis switching, are in good qualitative agreement with experimental results.

  9. Distributed network, wireless and cloud computing enabled 3-D ultrasound; a new medical technology paradigm.

    PubMed

    Meir, Arie; Rubinsky, Boris

    2009-11-19

    Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example; the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low end ultrasound transducer designed for 2-D, through a mobile device and wireless connection link between them. Producing high-end 3D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and a more accurate diagnosis, effectively saving the lives of people.

  10. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    NASA Astrophysics Data System (ADS)

    Aristovich, K. Y.; Khan, S. H.

    2010-07-01

    Realistic computer modelling of biological objects requires building very accurate and realistic computer models based on geometric and material data and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on Magnetoencephalography (MEG). The forward problem involves modelling and computation of the magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data and the material properties from Diffusion Tensor MRI (DTMRI). The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format. This allows the model to be used in a wide range of methods of analysis, such as the finite element method (FEM), the Boundary Element Method (BEM), Monte-Carlo simulations, etc. The generic model building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.

  11. A hybrid method for the computation of quasi-3D seismograms.

    NASA Astrophysics Data System (ADS)

    Masson, Yder; Romanowicz, Barbara

    2013-04-01

    The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global scale tomography, which requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with normal mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution. They provide us with images of the crust and the upper part of the mantle. In an attempt to extend such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method where SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D. Outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations. For now, these

  12. Hypertext-based computer vision teaching packages

    NASA Astrophysics Data System (ADS)

    Marshall, A. David

    1994-10-01

    The World Wide Web Initiative has provided a means of delivering hypertext- and multimedia-based information across the whole Internet. Many applications have been developed on such http servers. At Cardiff we have developed an http hypertext-based multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information, ranging from teaching modules, on-line documentation and timetables for departmental activities to more light-hearted hobby interests. One important and novel addition to the server has been the development of courseware facilities. This ranges from the provision of on-line lecture notes, exercises and their solutions to more interactive teaching packages. A variety of disciplines have benefitted, notably Computer Vision and Image Processing, but also C programming, X Windows, Computer Graphics and Parallel Computing. This paper will address the issues of the implementation of the Computer Vision and Image Processing packages and the advantages gained from using a hypertext-based system, and will also relate practical experiences of using the packages in a class environment. The paper addresses issues of how best to provide information in such a hypertext-based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed facilitates a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper will also detail many future developments we see possible. One of the key points raised in the paper is that Mosaic's hypertext language (html) is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed. This provides a powerful suite of utilities that can be exploited to develop many packages.

  13. Computed tomography measurement of 3D combustion chemiluminescence using single camera

    NASA Astrophysics Data System (ADS)

    Wang, Kuanliang; Li, Fei; Zeng, Hui; Zhang, Shaohua; Yu, Xilong

    2016-10-01

    Instantaneous measurement of flame spatial structure has long been desired for complicated combustion conditions (gas turbines, ramjets, etc.). Three-dimensional computed tomography of chemiluminescence (3D-CTC) is a promising testing technology owing to its simplicity, low cost, and high temporal and spatial resolution. In most former studies, multiple lenses and multiple CCDs were used to capture projections from different view angles. In order to improve adaptability, only one CCD was utilized here to build the 3D-CTC system, combined with customized fiber-based endoscopes (FBEs). This makes the technique more economical and simpler. Validation experiments were carried out using 10 small CH4 diffusion flames arranged in a ring structure. Based on one instantaneous image, computed tomography can be conducted using the Algebraic Reconstruction Technique (ART) algorithm. The reconstructed results, including the number of flames, the ring shape of the flames, and the inner and outer diameters of the ring, all match the physical structure well. This indicates that 3D combustion chemiluminescence can be well reconstructed using a single camera.
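
    The ART step treats reconstruction as the row-by-row solution of the linear projection system, relaxing the current voxel estimate towards each measured ray sum. A generic Kaczmarz-style sketch is shown below; the paper's actual system matrix and implementation details are not reproduced here.

        import numpy as np

        def art_reconstruct(A, b, n_sweeps=20, relax=0.25):
            """Algebraic Reconstruction Technique (Kaczmarz sweeps) for A x = b.

            A : (n_rays, n_voxels) dense projection matrix
            b : (n_rays,) measured projections (e.g. line-integrated chemiluminescence)
            """
            x = np.zeros(A.shape[1])
            row_norm2 = np.einsum('ij,ij->i', A, A)        # squared norm of each ray
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    if row_norm2[i] == 0.0:
                        continue
                    x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
                x = np.clip(x, 0.0, None)                   # emission is non-negative
            return x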

  14. 3D histomorphometric quantification of trabecular bones by computed microtomography using synchrotron radiation.

    PubMed

    Nogueira, L P; Braz, D; Barroso, R C; Oliveira, L F; Pinheiro, C J G; Dreossi, D; Tromba, G

    2010-12-01

    Conventional bone histomorphometry is an important method for the quantitative evaluation of bone microstructure. X-ray computed microtomography is a non-invasive technique which can be used to evaluate histomorphometric indices of trabecular bone (BV/TV, BS/BV, Tb.N, Tb.Th, Tb.Sp). In this technique, 3D images are used to quantify the whole sample, unlike the conventional approach, in which the quantification is performed on 2D slices and extrapolated to the 3D case. In this work, histomorphometric quantification using synchrotron 3D X-ray computed microtomography was performed to quantify the bone structure at different skeletal sites and to investigate the effects of bone diseases on the quantitative understanding of bone architecture. The images were obtained at the Synchrotron Radiation for MEdical Physics (SYRMEP) beamline at the ELETTRA synchrotron radiation facility, Italy. For normal and pathological bones from the same skeletal sites and individuals, a declining bone volume fraction was observed. The results obtained could form the basis for comparisons of bone microarchitecture and can be a valuable tool for predicting bone fragility. Copyright © 2010 Elsevier Ltd. All rights reserved.
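
    Once a tomographic volume has been segmented into bone and background, the simplest histomorphometric indices follow from voxel counting. The sketch below computes BV/TV directly and a crude voxel-face estimate of the bone surface for BS/BV; dedicated packages derive Tb.Th, Tb.Sp and Tb.N with surface meshing and sphere fitting, which are omitted here.

        import numpy as np

        def bone_indices(binary_volume, voxel_size):
            """BV/TV and a voxel-face estimate of BS/BV from a segmented 3D image.

            binary_volume : boolean 3D array, True where bone
            voxel_size    : edge length of the (isotropic) voxels, e.g. in mm
            """
            bone = binary_volume.astype(bool)
            bv = bone.sum() * voxel_size**3                 # bone volume
            tv = bone.size * voxel_size**3                  # total volume

            # count bone-voxel faces exposed to background along each axis
            exposed = 0
            for axis in range(3):
                diff = np.diff(bone.astype(np.int8), axis=axis)
                exposed += np.abs(diff).sum()
                # faces lying on the outer boundary of the array
                exposed += bone.take(0, axis=axis).sum() + bone.take(-1, axis=axis).sum()
            bs = exposed * voxel_size**2                    # crude bone surface estimate

            return {"BV/TV": bv / tv, "BS/BV": bs / bv if bv else np.nan}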

  15. Can computational goals inform theories of vision?

    PubMed

    Anderson, Barton L

    2015-04-01

    One of the most lasting contributions of Marr's posthumous book is his articulation of the different "levels of analysis" that are needed to understand vision. Although a variety of work has examined how these different levels are related, there is comparatively little examination of the assumptions on which his proposed levels rest, or the plausibility of the approach Marr articulated given those assumptions. Marr placed particular significance on computational level theory, which specifies the "goal" of a computation, its appropriateness for solving a particular problem, and the logic by which it can be carried out. The structure of computational level theory is inherently teleological: What the brain does is described in terms of its purpose. I argue that computational level theory, and the reverse-engineering approach it inspires, requires understanding the historical trajectory that gave rise to functional capacities that can be meaningfully attributed with some sense of purpose or goal, that is, a reconstruction of the fitness function on which natural selection acted in shaping our visual abilities. I argue that this reconstruction is required to distinguish abilities shaped by natural selection-"natural tasks" -from evolutionary "by-products" (spandrels, co-optations, and exaptations), rather than merely demonstrating that computational goals can be embedded in a Bayesian model that renders a particular behavior or process rational. Copyright © 2015 Cognitive Science Society, Inc.

  16. The RNA 3D Motif Atlas: Computational methods for extraction, organization and evaluation of RNA motifs.

    PubMed

    Parlea, Lorena G; Sweeney, Blake A; Hosseini-Asanjan, Maryam; Zirbel, Craig L; Leontis, Neocles B

    2016-07-01

    RNA 3D motifs occupy places in structured RNA molecules that correspond to the hairpin, internal and multi-helix junction "loops" of their secondary structure representations. As many as 40% of the nucleotides of an RNA molecule can belong to these structural elements, which are distinct from the regular double helical regions formed by contiguous AU, GC, and GU Watson-Crick basepairs. With the large number of atomic- or near atomic-resolution 3D structures appearing in a steady stream in the PDB/NDB structure databases, the automated identification, extraction, comparison, clustering and visualization of these structural elements presents an opportunity to enhance RNA science. Three broad applications are: (1) identification of modular, autonomous structural units for RNA nanotechnology, nanobiology and synthetic biology applications; (2) bioinformatic analysis to improve RNA 3D structure prediction from sequence; and (3) creation of searchable databases for exploring the binding specificities, structural flexibility, and dynamics of these RNA elements. In this contribution, we review methods developed for computational extraction of hairpin and internal loop motifs from a non-redundant set of high-quality RNA 3D structures. We provide a statistical summary of the extracted hairpin and internal loop motifs in the most recent version of the RNA 3D Motif Atlas. We also explore the reliability and accuracy of the extraction process by examining its performance in clustering recurrent motifs from homologous ribosomal RNA (rRNA) structures. We conclude with a summary of remaining challenges, especially with regard to extraction of multi-helix junction motifs. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Parallel Adaptive Computation of Blood Flow in a 3D ``Whole'' Body Model

    NASA Astrophysics Data System (ADS)

    Zhou, M.; Figueroa, C. A.; Taylor, C. A.; Sahni, O.; Jansen, K. E.

    2008-11-01

    Accurate numerical simulations of vascular trauma require the consideration of a larger portion of the vasculature than previously considered, due to the systemic nature of the human body's response. A patient-specific 3D model composed of 78 connected arterial branches extending from the neck to the lower legs is constructed to effectively represent the entire body. Recently developed outflow boundary conditions that appropriately represent the downstream vasculature bed which is not included in the 3D computational domain are applied at 78 outlets. In this work, the pulsatile blood flow simulations are started on a fairly uniform, unstructured mesh that is subsequently adapted using a solution-based approach to efficiently resolve the flow features. The adapted mesh contains non-uniform, anisotropic elements resulting in resolution that conforms with the physical length scales present in the problem. The effects of the mesh resolution on the flow field are studied, specifically on relevant quantities of pressure, velocity and wall shear stress.

  18. Effect of Random Geometric Uncertainty on the Computational Design of a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, C. R.; Newman, P. A.; Hou, G. J.-W.

    2002-01-01

    The effect of geometric uncertainty due to statistically independent, random, normally distributed shape parameters is demonstrated in the computational design of a 3-D flexible wing. A first-order second-moment statistical approximation method is used to propagate the assumed input uncertainty through coupled Euler CFD aerodynamic / finite element structural codes for both analysis and sensitivity analysis. First-order sensitivity derivatives obtained by automatic differentiation are used in the input uncertainty propagation. These propagated uncertainties are then used to perform a robust design of a simple 3-D flexible wing at supercritical flow conditions. The effect of the random input uncertainties is shown by comparison with conventional deterministic design results. Sample results are shown for wing planform, airfoil section, and structural sizing variables.
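
    The first-order second-moment approximation propagates input variances through the local gradient of the response: for independent inputs, sigma_f^2 is approximately the sum over i of (df/dx_i)^2 * sigma_i^2 evaluated at the mean. The generic sketch below uses finite-difference derivatives; the study itself obtains the derivatives by automatic differentiation of the coupled CFD/structural codes.

        import numpy as np

        def fosm(f, mean, sigma, h=1e-4):
            """First-order second-moment propagation for independent normal inputs.

            f     : scalar response function of a parameter vector
            mean  : (n,) mean values of the shape parameters
            sigma : (n,) standard deviations of the shape parameters
            """
            mean = np.asarray(mean, dtype=float)
            grad = np.zeros_like(mean)
            for i in range(mean.size):                    # central finite differences
                step = np.zeros_like(mean)
                step[i] = h
                grad[i] = (f(mean + step) - f(mean - step)) / (2.0 * h)
            var = np.sum((grad * np.asarray(sigma, dtype=float)) ** 2)
            return f(mean), np.sqrt(var)                  # mean response, propagated std dev

        # toy example: a lift-like response of two shape parameters
        mu, sd = fosm(lambda x: x[0] ** 2 + 3.0 * x[1], mean=[1.0, 2.0], sigma=[0.1, 0.05])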

  19. Computer-assisted three-dimensional surgical planning: 3D virtual articulator: technical note.

    PubMed

    Ghanai, S; Marmulla, R; Wiechnik, J; Mühling, J; Kotrikova, B

    2010-01-01

    This study presents a computer-assisted planning system for dysgnathia treatment. It describes the process of information gathering using a virtual articulator and how the splints are constructed for orthognathic surgery. The deviation of the virtually planned splints is shown in six cases on the basis of conventionally planned cases. In all cases the plaster models were prepared and scanned using a 3D laser scanner. Successive lateral and posterior-anterior cephalometric images were used for reconstruction before surgery. By identifying specific points on the X-rays and marking them on the virtual models, it was possible to enhance the 2D images to create a realistic 3D environment and to perform virtual repositioning of the jaw. A hexapod was used to transfer the virtual planning to the real splints. Preliminary results showed that conventional repositioning could be replicated using the virtual articulator.

  20. 3D image reconstruction on x-ray micro-computed tomography

    NASA Astrophysics Data System (ADS)

    Louk, Andreas C.

    2015-03-01

    A model for 3D image reconstruction with an x-ray micro-computed tomography scanner (micro-CTScan) has been developed. A small object was placed under inspection in the x-ray micro-CTScan. The object cross-section was assumed to lie in the x-y plane, while its height was along the z-axis. Using a planar radiography detector, a set of digital radiographs representing multiple angles of view from 0° to 360° at an interval of 1° was obtained. Then a set of cross-sectional tomographic images was reconstructed slice by slice. Finally, all image slices were stacked together sequentially to obtain a 3D image model of the object being inspected. This development allows a better understanding of the internal structure of the object, based on the cross-sectional slices and the surface skin.
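
    In a parallel-beam approximation, each detector row across the full set of radiographs forms a sinogram that can be reconstructed independently, and the resulting slices are stacked into a volume as described above. The sketch below uses scikit-image's filtered back-projection as an assumed stand-in for the scanner's actual reconstruction code and geometry.

        import numpy as np
        from skimage.transform import iradon

        def reconstruct_volume(radiographs, angles_deg):
            """Slice-by-slice filtered back-projection, then stack into a 3D model.

            radiographs : (n_angles, n_rows, n_cols) array of projection images
            angles_deg  : (n_angles,) acquisition angles, e.g. 0..359 in 1-degree steps
            """
            n_angles, n_rows, n_cols = radiographs.shape
            slices = []
            for z in range(n_rows):                     # one sinogram per detector row
                sinogram = radiographs[:, z, :].T       # shape (n_cols, n_angles)
                slices.append(iradon(sinogram, theta=angles_deg))
            return np.stack(slices, axis=0)             # z-stacked 3D image model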

  1. Application of 3D-computed tomography angiography technology in large meningioma resection.

    PubMed

    Chen, Jian-Qiang; Guan, Yin; Li, Gang; Li, Xiao-Hua; Zhan, Yue-Fu; Li, Xiang-Yin; Nie, Liu; Han, Xiang-Jun

    2012-07-01

    To discuss the role of 3D-computed tomography angiography (3D-CTA) technology in reducing injuries in large meningioma surgery. 3D-CTA preoperative examinations were done in 473 patients with large meningioma (simulated group). The images were analyzed on a 3D post-processing workstation. By observing the major intracranial blood vessels, the venous sinuses, and the compression and invasion pattern in the nerve region, the risk level of the surgery was assessed, the surgical procedure was simulated, and the surgical removal plan, surgical route, and tumor blood-supplying artery embolization plan were established. Two hundred and fifty-seven large meningioma patients who did not undergo 3D-CTA preoperative examination served as the control group. The incidence of postoperative complications, intraoperative blood transfusion and the operation time were compared between these two groups. In the simulated group, the Simpson grade I and II resection rate was 80.3% (380/473), similar to that of the control group (81.3%, 209/257). The incidence of postoperative complications in the 3D-CTA simulated group was 37.0%, significantly lower than that of the control group (48.2%) (P<0.01). The intraoperative blood transfusion volume for the simulated group and the control group was (523.4±208.1) mL and (592.0±263.3) mL, respectively, a significant difference between the two groups (P<0.01). The operation time was significantly shorter in the simulated group [(314.8±106.3) min] than in the control group [(358.4±147.9) min] (P<0.01). Application of 3D-CTA imaging technology to risk assessment before large meningioma resection can assist in the rational planning of tumor resection and surgical routes, and is helpful in reducing injuries and complications and improving the prognosis of the patients. Copyright © 2012 Hainan Medical College. Published by Elsevier B.V. All rights reserved.

  2. Calcaneal osteotomy preoperative planning system with 3D full-sized computer-assisted technology.

    PubMed

    Chou, Yi-Jiun; Sun, Shuh-Ping; Liu, Hsin-Hua

    2011-10-01

    In this study, we developed a CT-based computer-assisted pre-operative planning and simulation system for calcaneal osteotomy by integrating the functions of different software packages. The system uses a full-scale 3D reverse engineering technique in designing and developing preoperative planning modules for calcaneal osteotomy surgery. The planning system presents a real-sized three-dimensional image of the calcaneus and provides detailed interior measurements of the calcaneus from various cutting planes. The integrated functions include 3-D image model capturing, cutting, moving, rotating and measurement of the relevant foot anatomy, and can be combined according to the user's needs. Surgeons can utilize the system as part of preoperative planning to develop efficient operative procedures. The system also has a database that can be updated and extended, and provides clinical cases to different users for experience-based learning.

  3. A novel iterative computation algorithm for Kinoform of 3D object

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao-yu; Chuang, Pei; Wang, Xi; Zong, Yantao

    2012-11-01

    A novel method for computing the kinoform of a 3D object based on the traditional iterative Fourier transform algorithm (IFTA) is proposed in this paper. A kinoform is a special kind of computer-generated hologram (CGH) with very high diffraction efficiency, since it modulates only the phase of the illuminating light and has no cross-interference from a conjugate image. The traditional IFTA assumes that the reconstructed image lies at infinity (in the Fraunhofer diffraction region) and ignores the depth of the 3D object, so it can only calculate a two-dimensional kinoform. The algorithm proposed in this paper divides the three-dimensional object into several object planes in depth and treats every object plane as a target image; the iterative computation is then carried out between one input plane (the kinoform) and multiple output planes (the reconstructed images). A spatial phase factor is added to the iterative process to represent the depth characteristics of the 3D object, so the reconstructed images lie in the Fresnel diffraction region. An optical reconstruction experiment with a kinoform computed by this method is realized using a Liquid Crystal on Silicon (LCoS) Spatial Light Modulator (SLM). The Mean Square Error (MSE) and Structural Similarity (SSIM) between the original and reconstructed images are used to evaluate the method. The experimental results show that the algorithm is fast and that the resulting kinoform can reconstruct the object in different planes with high precision under plane-wave illumination. The reconstructed images convey a three-dimensional visual effect. Finally, the influence of spacing and occlusion between different object planes on the reconstructed image is also discussed.
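
    At its core, the traditional IFTA iterates between the kinoform plane and the reconstruction plane, keeping only the phase on the kinoform side and re-imposing the target amplitude on the image side; the paper's extension adds a depth-dependent Fresnel phase factor and cycles over several output planes. The minimal single-plane Fraunhofer sketch below is for orientation only and is not the multi-plane algorithm itself.

        import numpy as np

        def ifta_kinoform(target_amplitude, n_iter=50, seed=0):
            """Classic single-plane IFTA: compute a phase-only hologram (kinoform)."""
            rng = np.random.default_rng(seed)
            phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
            for _ in range(n_iter):
                # propagate kinoform plane -> image plane (Fraunhofer regime: a single FFT)
                image_field = np.fft.fft2(np.exp(1j * phase))
                # keep the obtained phase, impose the desired amplitude in the image plane
                image_field = target_amplitude * np.exp(1j * np.angle(image_field))
                # propagate back and discard amplitude: the kinoform modulates phase only
                phase = np.angle(np.fft.ifft2(image_field))
            # the multi-plane variant described in the paper additionally cycles over
            # several object planes, each with its own quadratic (Fresnel) phase factor
            return phase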

  4. Local spatial frequency analysis for computer vision

    NASA Technical Reports Server (NTRS)

    Krumm, John; Shafer, Steven A.

    1990-01-01

    A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.

  5. Computational Analysis of the Transonic Dynamics Tunnel Using FUN3D

    NASA Technical Reports Server (NTRS)

    Chwalowski, Pawel; Quon, Eliot; Brynildsen, Scott E.

    2016-01-01

    This paper presents results from an exploratory two-year effort of applying Computational Fluid Dynamics (CFD) to analyze the empty-tunnel flow in the NASA Langley Research Center Transonic Dynamics Tunnel (TDT). The TDT is a continuous-flow, closed circuit, 16- x 16-foot slotted-test-section wind tunnel, with capabilities to use air or heavy gas as a working fluid. In this study, experimental data acquired in the empty tunnel using the R-134a test medium was used to calibrate the computational data. The experimental calibration data includes wall pressures, boundary-layer profiles, and the tunnel centerline Mach number profiles. Subsonic and supersonic flow regimes were considered, focusing on Mach 0.5, 0.7 and Mach 1.1 in the TDT test section. This study discusses the computational domain, boundary conditions, and initial conditions selected and the resulting steady-state analyses using NASA's FUN3D CFD software.

  6. Computational Analysis of the Transonic Dynamics Tunnel Using FUN3D

    SciTech Connect

    Chwalowski, Pawel; Quon, Eliot; Brynildsen, Scott E.

    2016-01-04

    This paper presents results from an exploratory two-year effort of applying Computational Fluid Dynamics (CFD) to analyze the empty-tunnel flow in the NASA Langley Research Center Transonic Dynamics Tunnel (TDT). The TDT is a continuous-flow, closed circuit, 16- x 16-foot slotted-test-section wind tunnel, with capabilities to use air or heavy gas as a working fluid. In this study, experimental data acquired in the empty tunnel using the R-134a test medium was used to calibrate the computational data. The experimental calibration data includes wall pressures, boundary-layer profiles, and the tunnel centerline Mach number profiles. Subsonic and supersonic flow regimes were considered, focusing on Mach 0.5, 0.7 and Mach 1.1 in the TDT test section. This study discusses the computational domain, boundary conditions, and initial conditions selected and the resulting steady-state analyses using NASA's FUN3D CFD software.

  7. 3D object optonumerical acquisition methods for CAD/CAM and computer graphics systems

    NASA Astrophysics Data System (ADS)

    Sitnik, Robert; Kujawinska, Malgorzata; Pawlowski, Michal E.; Woznicki, Jerzy M.

    1999-08-01

    The creation of a virtual object for CAD/CAM and computer graphics on the basis of data gathered by full-field optical measurement of a 3D object is presented. The experimental co-ordinates are alternatively obtained by a combined fringe projection/photogrammetry based system or a fringe projection/virtual markers setup. A new and fully automatic procedure which processes the cloud of measured points into a triangular mesh accepted by CAD/CAM and computer graphics systems is presented. Its applicability to various classes of objects is tested, including an error analysis of the generated virtual objects. The usefulness of the method is proved by applying the virtual object in a rapid prototyping system and in a computer graphics environment.

  8. Development of computer program NAS3D using Vector processing for geometric nonlinear analysis of structures

    NASA Technical Reports Server (NTRS)

    Mangalgiri, P. D.; Prabhakaran, R.

    1986-01-01

    An algorithm for vectorized computation of the stiffness matrices of an 8-noded isoparametric hexahedron element for geometric nonlinear analysis was developed. This was used in conjunction with the earlier 2-D program GAMNAS to develop the new program NAS3D for geometric nonlinear analysis. A conventional, modified Newton-Raphson process is used for the nonlinear analysis. New schemes for the computation of stiffness and strain energy release rates are presented. The organization of the program is explained and some results on four sample problems are given. A study of CPU times showed that savings by a factor of 11 to 13 were achieved when vectorized computation was used for the stiffness instead of the conventional scalar approach. Finally, the scheme for inputting data is explained.

  9. Computer-aided periodontal disease diagnosis using computer vision.

    PubMed

    Juan, M C; Alcañiz, M; Monserrat, C; Grau, V; Knoll, C

    1999-01-01

    Periodontal diseases are the major cause of tooth loss. The study of the evolution of these diseases is crucial to achieve adequate planning and treatment. Depth probing is essential for determining the periodontal disease stage. In this paper we present a new system for Computer-Aided Periodontal Disease Diagnosis using computer vision. The system automates depth probing and incorporates a colour camera fitted together with a plastic probe that automatically and precisely obtains the probing depth measurement. The system has been tested by several periodontists on 125 teeth of different patients. The differences between the values obtained by the system and by two periodontists were not significant.

  10. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    NASA Astrophysics Data System (ADS)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

    US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. One approach to pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting, used to demonstrate the various illumination conditions and visual phenomena which might be experienced when using night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby, et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research & Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g. rain, clouds, snow, dust, smoke, chemical releases) is calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene-Weather-Atmosphere-Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic

  11. Using Computer Vision to Access Appliance Displays.

    PubMed

    Fusco, Giovanni; Tekin, Ender; Ladner, Richard E; Coughlan, James M

    2014-01-01

    People who are blind or visually impaired face difficulties accessing a growing array of everyday appliances, needed to perform a variety of daily activities, because they are equipped with electronic displays. We are developing a "Display Reader" smartphone app, which uses computer vision to help a user acquire a usable image of a display, to address this problem. The current prototype analyzes video from the smartphone's camera, providing real-time feedback to guide the user until a satisfactory image is acquired, based on automatic estimates of image blur and glare. Formative studies were conducted with several blind and visually impaired participants, whose feedback is guiding the development of the user interface. The prototype software has been released as a Free and Open Source (FOSS) project.
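
    The abstract does not specify how blur is estimated, but a common proxy for such a check is the variance of the Laplacian of the grayscale frame, where a low value indicates a defocused or motion-blurred image. The sketch below is therefore only an assumed illustration, with a threshold that would need tuning per camera.

        import numpy as np
        from scipy import ndimage

        def blur_score(gray_frame):
            """Variance-of-Laplacian focus measure: higher means sharper."""
            lap = ndimage.laplace(gray_frame.astype(float))
            return lap.var()

        def frame_is_usable(gray_frame, threshold=100.0):
            """Crude acceptance test; the threshold is a hypothetical, camera-dependent value."""
            return blur_score(gray_frame) >= threshold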

  12. On computer vision in wireless sensor networks.

    SciTech Connect

    Berry, Nina M.; Ko, Teresa H.

    2004-09-01

    Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low-power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.

  13. Computer vision for high content screening.

    PubMed

    Kraus, Oren Z; Frey, Brendan J

    2016-01-01

    High Content Screening (HCS) technologies that combine automated fluorescence microscopy with high throughput biotechnology have become powerful systems for studying cell biology and drug screening. These systems can produce more than 100 000 images per day, making their success dependent on automated image analysis. In this review, we describe the steps involved in quantifying microscopy images and different approaches for each step. Typically, individual cells are segmented from the background using a segmentation algorithm. Each cell is then quantified by extracting numerical features, such as area and intensity measurements. As these feature representations are typically high dimensional (>500), modern machine learning algorithms are used to classify, cluster and visualize cells in HCS experiments. Machine learning algorithms that learn feature representations, in addition to the classification or clustering task, have recently advanced the state of the art on several benchmarking tasks in the computer vision community. These techniques have also recently been applied to HCS image analysis.
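
    The segment-then-quantify pipeline described above can be illustrated with standard tooling: threshold the image, label connected cells, and extract per-cell area and intensity features for downstream machine learning. The sketch below assumes scipy and scikit-image are available and uses a simple global threshold; production HCS pipelines typically use dedicated nucleus/cell segmentation models.

        import numpy as np
        from scipy import ndimage
        from skimage.filters import threshold_otsu

        def quantify_cells(image):
            """Segment cells from background and extract simple per-cell features.

            image : 2D fluorescence intensity array
            returns a (n_cells, 2) array of [area_in_pixels, mean_intensity]
            """
            mask = image > threshold_otsu(image)           # global threshold segmentation
            labels, n_cells = ndimage.label(mask)          # connected-component labelling
            idx = np.arange(1, n_cells + 1)
            areas = ndimage.sum(mask, labels, idx)         # pixel count per labelled cell
            means = ndimage.mean(image, labels, idx)       # mean intensity per cell
            return np.column_stack([areas, means])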

  14. Pore detection in Computed Tomography (CT) soil 3D images using singularity map analysis

    NASA Astrophysics Data System (ADS)

    Sotoca, Juan J. Martin; Tarquis, Ana M.; Saa Requejo, Antonio; Grau, Juan B.

    2016-04-01

    X-ray Computed Tomography (CT) images have significantly helped the study of the internal soil structure. This technique has two main advantages: 1) it is a non-invasive technique, i.e., it does not modify the internal soil structure, and 2) it provides good resolution. The major disadvantage is that these images are sometimes low in contrast at the solid/pore interface. One of the main problems in analyzing soil structure through CT images is to segment them into solid and pore space. To do so, we have different segmentation techniques at our disposal that are mainly based on thresholding methods, in which global or local thresholds are calculated to separate pore space from solid space. The aim of this presentation is to develop the fractal approach to soil structure using "singularity maps" and the "Concentration-Area (CA) method". We will establish an analogy between mineralization processes in ore deposits and morphogenesis processes in soils. From this analogy, a new 3D segmentation method is proposed, the "3D Singularity-CA" method. A comparison with traditional 3D segmentation methods will be performed to show the main differences among them.
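
    The Concentration-Area (CA) method referred to above looks for breaks in the log-log relation between a grey-level threshold and the number of voxels exceeding it; thresholds at slope breaks then separate pore from solid space. The sketch below only computes the CA curve for a CT volume; locating the slope breaks and combining the result with the singularity map is left to the authors' method.

        import numpy as np

        def concentration_area_curve(volume, n_thresholds=64):
            """Return (thresholds, counts) for the log-log Concentration-Area plot.

            volume : 3D array of CT grey values
            counts[i] is the number of voxels with value >= thresholds[i]
            """
            v = np.asarray(volume, dtype=float).ravel()
            thresholds = np.linspace(v.min(), v.max(), n_thresholds)
            counts = np.array([(v >= t).sum() for t in thresholds])
            return thresholds, counts

        # slope breaks in log(counts) versus log(thresholds) suggest candidate
        # grey-level thresholds for separating pore space from solid space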

  15. First direct 3D visualisation of microstructural evolutions during sintering through X-ray computed microtomography

    SciTech Connect

    Bernard, Dominique . E-mail: bernard@icmcb.u-bordeaux.fr; Gendron, Damien; Heintz, Jean-Marc; Bordere, Sylvie; Etourneau, Jean

    2005-01-03

    X-ray computed microtomography (XCMT) has been applied to ceramic samples of different materials to visualise, for the first time at this scale, real 3D microstructural evolutions during sintering. Using this technique, it has been possible to follow the whole sintering process of the same set of grains. Two materials have been studied: a glass powder heat treated at 700 °C and a crystallised lithium borate (Li6Gd(BO3)3) powder heat treated at 720 °C. XCMT measurements have been done after different sintering times. For each material, a sub-volume was individualised and localised on the successive recordings and its 3D images numerically reconstructed. A description of the three-dimensional microstructural evolution is proposed. From the 3D experimental data, quantitative evolutions of parameters such as porosity and neck size are presented for the glass sample. The possibilities offered by this technique for studying complex sintering processes, as for lithium borate, are illustrated.

  16. 3D cephalometric analysis obtained from computed tomography. Review of the literature

    PubMed Central

    Rossini, Giulia; Cavallini, Costanza; Cassetta, Michele; Barbato, Ersilia

    2012-01-01

    Introduction: The aim of this systematic review is to estimate the accuracy and reproducibility of craniometric measurements and the reliability of landmarks identified with computed tomography (CT) techniques in 3D cephalometric analysis. Methods: Computerized and manual searches were conducted up to 2011 for studies that addressed these objectives. The selection criteria were: (1) the use of human specimens; (2) the comparison between 2D and 3D cephalometric analysis; (3) the assessment of accuracy, reproducibility of measurements and reliability of landmark identification with CT images compared with two-dimensional conventional radiographs. The Cochrane Handbook for Systematic Reviews of Interventions was used as the guideline for this article. Results: Twenty-seven articles met the inclusion criteria. Most of them demonstrated high measurement accuracy and reproducibility, and good landmark reliability, but their cephalometric analysis methodology varied widely. Conclusion: These differences among the studies in how measurements were made do not permit a direct comparison between them. Future developments in these techniques should provide a standardized method for conducting 3D CT cephalometric analysis. PMID:22545187

  17. New solutions for industrial inspection based on 3D computer tomography

    NASA Astrophysics Data System (ADS)

    Kroll, Julia; Effenberger, Ira; Verl, Alexander

    2008-04-01

    In recent years the requirements of industrial applications relating to image processing have significantly increased. According to fast and modern production processes and optimized manufacturing of high quality products, new ways of image acquisition and analysis are needed. Here the industrial computer tomography (CT) as a non-destructive technology for 3D data generation meets this challenge by offering the possibility of complete inspection of complex industrial parts with all outer and inner geometric features. Consequently CT technology is well suited for different kinds of industrial image-based applications in the field of quality assurance like material testing or first article inspection. Moreover surface reconstruction and reverse engineering applications will benefit from CT. In this paper our new methods for efficient 3D CT-image processing are presented. This includes improved solutions for 3D surface reconstruction, innovative approaches of CAD-based segmentation in the CT volume data and the automatic geometric feature detection in complex parts. However the aspect of accuracy is essential in the field of metrology. In order to enhance precision the CT sensor can be combined with other, more accurate sensor systems generating measure points for CT data correction. All algorithms are applied to real data sets in order to demonstrate our tools.

  18. Registration of 3D ultrasound computer tomography and MRI for evaluation of tissue correspondences

    NASA Astrophysics Data System (ADS)

    Hopp, T.; Dapp, R.; Zapf, M.; Kretzek, E.; Gemmeke, H.; Ruiter, N. V.

    2015-03-01

    3D Ultrasound Computer Tomography (USCT) is a new imaging method for breast cancer diagnosis. In the current state of development it is essential to correlate USCT with a known imaging modality like MRI to evaluate how different tissue types are depicted. Due to different imaging conditions, e.g. with the breast subject to buoyancy in USCT, a direct correlation is demanding. We present a 3D image registration method to reduce positioning differences and allow direct side-by-side comparison of USCT and MRI volumes. It is based on a two-step approach including a buoyancy simulation with a biomechanical model and free form deformations using cubic B-Splines for a surface refinement. Simulation parameters are optimized patient-specifically in a simulated annealing scheme. The method was evaluated with in-vivo datasets resulting in an average registration error below 5mm. Correlating tissue structures can thereby be located in the same or nearby slices in both modalities and three-dimensional non-linear deformations due to the buoyancy are reduced. Image fusion of MRI volumes and USCT sound speed volumes was performed for intuitive display. By applying the registration to data of our first in-vivo study with the KIT 3D USCT, we could correlate several tissue structures in MRI and USCT images and learn how connective tissue, carcinomas and breast implants observed in the MRI are depicted in the USCT imaging modes.

  19. Computational-optical microscopy for 3D biological imaging beyond the diffraction limit

    NASA Astrophysics Data System (ADS)

    Grover, Ginni

    In recent years, super-resolution imaging has become an important fluorescent microscopy tool. It has enabled imaging of structures smaller than the optical diffraction limit with resolution less than 50 nm. Extension to high-resolution volume imaging has been achieved by integration with various optical techniques. In this thesis, the development of a fluorescent microscope enabling high-resolution, extended-depth, three-dimensional (3D) imaging is discussed, achieved by integrating computational methods with optical systems. In the first part of the thesis, point spread function (PSF) engineering for volume imaging is discussed. A class of PSFs, referred to as double-helix (DH) PSFs, is generated. The PSFs exhibit two focused spots in the image plane which rotate about the optical axis, encoding depth in the rotation of the image. These PSFs extend the depth-of-field by a factor of up to ~5. Precision performance of the DH-PSFs, based on an information theoretical analysis, is compared with other 3D methods, with the conclusion that the DH-PSFs provide the best precision and the longest depth-of-field. Out of various possible DH-PSFs, a suitable PSF is obtained for super-resolution microscopy. The DH-PSFs are implemented in imaging systems, such as a microscope, with a special phase modulation at the pupil plane. Surface-relief elements which are polarization-insensitive and ~90% light efficient are developed for phase modulation. The photon-efficient DH-PSF microscopes thus developed are used, along with optimal position estimation algorithms, for tracking and super-resolution imaging in 3D. Imaging at depths-of-field of up to 2.5 μm is achieved without focus scanning. Microtubules were imaged with 3D resolution of (6, 9, 39) nm, which is in close agreement with the theoretical limit. A quantitative study of co-localization of two proteins in volume was conducted in live bacteria. In the last part of the thesis practical aspects of the DH-PSF microscope are

  20. Multigrid Computations of 3-D Incompressible Internal and External Viscous Rotating Flows

    NASA Technical Reports Server (NTRS)

    Sheng, Chunhua; Taylor, Lafayette K.; Chen, Jen-Ping; Jiang, Min-Yee; Whitfield, David L.

    1996-01-01

    This report presents multigrid methods for solving the 3-D incompressible viscous rotating flows in a NASA low-speed centrifugal compressor and a marine propeller 4119. Numerical formulations are given in both the rotating reference frame and the absolute frame. Comparisons are made for the accuracy, efficiency, and robustness between the steady-state scheme and the time-accurate scheme for simulating viscous rotating flows for complex internal and external flow applications. Prospects for further increase in efficiency and accuracy of unsteady time-accurate computations are discussed.

  1. High-performance computational and geostatistical experiments for testing the capabilities of 3-d electrical tomography

    SciTech Connect

    Carle, S. F.; Daily, W. D.; Newmark, R. L.; Ramirez, A.; Tompson, A.

    1999-01-19

    This project explores the feasibility of combining geologic insight, geostatistics, and high-performance computing to analyze the capabilities of 3-D electrical resistance tomography (ERT). Geostatistical methods are used to characterize the spatial variability of geologic facies that control sub-surface variability of permeability and electrical resistivity. Synthetic ERT data sets are generated from geostatistical realizations of alluvial facies architecture. The synthetic data sets enable comparison of the "truth" to inversion results, quantification of the ability to detect particular facies at particular locations, and sensitivity studies on inversion parameters.

  2. Computing 3-D steady supersonic flow via a new Lagrangian approach

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Liou, M.-S.

    1993-01-01

    The new Lagrangian method introduced by Loh and Hui (1990) is extended to 3-D steady supersonic flow computation. Details of the conservation form, the implementation of the local Riemann solver, and the Godunov and high-resolution TVD schemes are presented. The new approach is robust yet accurate, capable of handling complicated geometry and interactions between discontinuous waves. It retains all the advantages claimed for the 2-D method of Loh and Hui, e.g., crisp resolution of a slip surface (contact discontinuity) and automatic grid generation along the stream.

  4. Reliability of clinically relevant 3D foot bone angles from quantitative computed tomography

    PubMed Central

    2013-01-01

    Background: Surgical treatment and clinical management of foot pathology require accurate, reliable assessment of foot deformities. Foot and ankle deformities are multi-planar and therefore difficult to quantify by standard radiographs. Three-dimensional (3D) imaging modalities have been used to define bone orientations using inertial axes based on bone shape, but these inertial axes can fail to mimic established bone angles used in orthopaedics and clinical biomechanics. To provide improved clinical relevance of 3D bone angles, we developed techniques to define bone axes using landmarks on quantitative computed tomography (QCT) bone surface meshes. We aimed to assess the measurement precision of landmark-based, 3D bone-to-bone orientations of hind foot and lesser tarsal bones for expert raters and a template-based automated method. Methods: Two raters completed two repetitions each for twenty feet (10 right, 10 left), placing anatomic landmarks on the surfaces of the calcaneus, talus, cuboid, and navicular. Landmarks were also recorded using the automated, template-based method. For each method, 3D bone axes were computed from landmark positions, and Cardan sequences produced sagittal, frontal, and transverse plane angles of bone-to-bone orientations. Angular reliability was assessed using intraclass correlation coefficients (ICCs) and the root mean square standard deviation (RMS-SD) for intra-rater and inter-rater precision, and rater versus automated agreement. Results: Intra- and inter-rater ICCs were generally high (≥ 0.80), and the ICCs for each rater compared to the automated method were similarly high. RMS-SD intra-rater precision ranged from 1.4 to 3.6° and 2.4 to 6.1°, respectively, for the two raters, which compares favorably to uni-planar radiographic precision. The greatest variability was in the Navicular:Talus sagittal plane angle and the Cuboid:Calcaneus frontal plane angle. Precision of the automated, atlas-based template method versus the raters was comparable to
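
    Since the abstract leaves the axis construction and angle extraction implicit, the following is a minimal sketch of how landmark-defined bone axes could be turned into bone-to-bone Cardan angles. The landmark convention and the Z-Y-X sequence used here are assumptions for illustration, not necessarily the authors' choices.

      import numpy as np

      def bone_axes(origin, anterior_pt, superior_pt):
          """Build an orthonormal bone frame from three surface landmarks (assumed convention)."""
          x = anterior_pt - origin                   # roughly anterior
          z = np.cross(x, superior_pt - origin)      # roughly lateral
          y = np.cross(z, x)                         # completes a right-handed frame
          return np.column_stack([v / np.linalg.norm(v) for v in (x, y, z)])

      def cardan_zyx(R):
          """Extract Z-Y-X Cardan angles (radians) from a rotation matrix."""
          ry = np.arcsin(-R[2, 0])
          rx = np.arctan2(R[2, 1], R[2, 2])
          rz = np.arctan2(R[1, 0], R[0, 0])
          return rx, ry, rz

      # Relative orientation of bone B expressed in the frame of bone A
      Ra = bone_axes(np.zeros(3), np.array([1.0, 0, 0]), np.array([0, 0, 1.0]))
      Rb = bone_axes(np.zeros(3), np.array([0.9, 0.1, 0]), np.array([0, 0.1, 1.0]))
      print(np.degrees(cardan_zyx(Ra.T @ Rb)))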

  5. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses.

    PubMed

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs and "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-source signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.

  6. Topographic Mapping of Residual Vision by Computer

    ERIC Educational Resources Information Center

    MacKeben, Manfred

    2008-01-01

    Many persons with low vision have diseases that damage the retina only in selected areas, which can lead to scotomas (blind spots) in perception. The most frequent of these diseases is age-related macular degeneration (AMD), in which foveal vision is often impaired by a central scotoma that impairs vision of fine detail and causes problems with…

  8. Airport databases for 3D synthetic-vision flight-guidance displays: database design, quality assessment, and data generation

    NASA Astrophysics Data System (ADS)

    Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe

    1999-07-01

    In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), SVS serve to enhance pilot spatial awareness through 3-dimensional perspective views of the objects in the environment. Therefore, all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles, and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annexes 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED-76 were established in the concept. They can be differentiated into object-related quality-assessment methods, following the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods, with the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both flight-guidance SVS and other aeronautical applications such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation. Most airport data are not available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annexes 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large amounts of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite

  9. Non-Boolean computing with nanomagnets for computer vision applications

    NASA Astrophysics Data System (ADS)

    Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.
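
    The coprocessor idea rests on letting a nanomagnetic array relax to a low-energy state of a quadratic Hamiltonian. A crude software analogue of that relaxation, using a made-up coupling matrix J and bias vector h rather than the authors' saliency-derived terms, is sketched below with greedy single-spin flips.

      import numpy as np

      rng = np.random.default_rng(0)

      def energy(s, J, h):
          """Quadratic (Ising-like) energy E(s) = -s.J.s - h.s over states s in {-1, +1}."""
          return -s @ J @ s - h @ s

      def greedy_minimize(J, h, iters=2000):
          """Flip one spin at a time whenever it lowers the energy; a toy stand-in
          for the physical relaxation of the nanomagnet array."""
          n = len(h)
          s = rng.choice([-1, 1], size=n)
          for _ in range(iters):
              i = rng.integers(n)
              trial = s.copy()
              trial[i] = -trial[i]
              if energy(trial, J, h) < energy(s, J, h):
                  s = trial
          return s, energy(s, J, h)

      # Hypothetical symmetric couplings standing in for image-derived saliency terms
      J = rng.normal(size=(8, 8)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
      h = rng.normal(size=8)
      state, E = greedy_minimize(J, h)
      print(state, E)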

  10. Non-Boolean computing with nanomagnets for computer vision applications.

    PubMed

    Bhanja, Sanjukta; Karunaratne, D K; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.

  11. 3D virtual human atria: A computational platform for studying clinical atrial fibrillation

    PubMed Central

    Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui

    2011-01-01

    Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria – 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to the mechanisms of the normal rhythm and AF arrhythmogenesis are investigated and discussed. The 3D model of the atria itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi-scale electrical phenomena during atrial conduction and arrhythmogenesis. Results of such simulations can be directly compared with experimental electrophysiological and endocardial mapping data, as well as clinical ECG recordings. More importantly, the virtual human atria can provide validated means for

  12. 3D virtual human atria: A computational platform for studying clinical atrial fibrillation.

    PubMed

    Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui

    2011-10-01

    Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria--the 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to mechanisms of the normal rhythm and arrhythmogenesis were investigated. Primarily, the simulations showed that tissue heterogeneity caused the break-down of the normal AP wave-fronts at rapid pacing rates, which initiated a pair of re-entrant spiral waves; and tissue anisotropy resulted in a further break-down of the spiral waves into multiple meandering wavelets characteristic of AF. The 3D virtual atria model itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi

  13. 3D modeling method for computer animate based on modified weak structured light method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, while, on the other hand, precision is not as critical a factor in that situation. In this paper, a new low-cost 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. An ordinary weak structured light configuration requires one or two reference planes, and the shadows on these planes must be tracked during scanning, which undermines the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is expanded widely. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints to get a full description of the object, and after a series of operations a NURBS surface model is generated. A complex toy bear is used to verify the efficiency of the method, with errors ranging from 0.7783 mm to 1.4326 mm compared with the ground truth measurement.
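
    The core geometric step of such a shadow-based scanner is to back-project each shadow-edge pixel into a viewing ray and intersect it with the current shadow plane. A minimal sketch of that step is given below; the camera intrinsics and the plane parameters are hypothetical, and calibration, shadow tracking and the ICP merging are all omitted.

      import numpy as np

      def pixel_ray(u, v, K):
          """Back-project a pixel into a unit viewing ray (camera at the origin)."""
          d = np.linalg.inv(K) @ np.array([u, v, 1.0])
          return d / np.linalg.norm(d)

      def intersect_ray_plane(direction, plane_n, plane_d):
          """Intersect the ray X = t * direction with the plane n.X + d = 0."""
          t = -plane_d / (plane_n @ direction)
          return t * direction

      # Hypothetical intrinsics and a shadow plane estimated from the stick position
      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
      n = np.array([0.1, -0.9, 0.42]); n /= np.linalg.norm(n)
      d = -0.5
      point_3d = intersect_ray_plane(pixel_ray(350, 260, K), n, d)
      print(point_3d)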

  14. "Let's get physical": advantages of a physical model over 3D computer models and textbooks in learning imaging anatomy.

    PubMed

    Preece, Daniel; Williams, Sarah B; Lam, Richard; Weller, Renate

    2013-01-01

    Three-dimensional (3D) information plays an important part in medical and veterinary education. Appreciating complex 3D spatial relationships requires a strong foundational understanding of anatomy and mental 3D visualization skills. Novel learning resources have been introduced to anatomy training to achieve this. Objective evaluation of their comparative efficacies remains scarce in the literature. This study developed and evaluated the use of a physical model in demonstrating the complex spatial relationships of the equine foot. It was hypothesized that the newly developed physical model would be more effective for students to learn magnetic resonance imaging (MRI) anatomy of the foot than textbooks or computer-based 3D models. Third year veterinary medicine students were randomly assigned to one of three teaching aid groups (physical model; textbooks; 3D computer model). The comparative efficacies of the three teaching aids were assessed through students' abilities to identify anatomical structures on MR images. Overall mean MRI assessment scores were significantly higher in students utilizing the physical model (86.39%) compared with students using textbooks (62.61%) and the 3D computer model (63.68%) (P < 0.001), with no significant difference between the textbook and 3D computer model groups (P = 0.685). Student feedback was also more positive in the physical model group compared with both the textbook and 3D computer model groups. Our results suggest that physical models may hold a significant advantage over alternative learning resources in enhancing visuospatial and 3D understanding of complex anatomical architecture, and that 3D computer models have significant limitations with regards to 3D learning. © 2013 American Association of Anatomists.

  15. Chapter 11. Quality evaluation of apple by computer vision

    USDA-ARS?s Scientific Manuscript database

    Apple is one of the most consumed fruits in the world, and there is a critical need for enhanced computer vision technology for quality assessment of apples. This chapter gives a comprehensive review on recent advances in various computer vision techniques for detecting surface and internal defects ...

  16. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
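
    The bilinear face model described above amounts to contracting a rank-3 core tensor with identity and expression weight vectors. A minimal sketch, with a random core tensor and a deliberately small vertex count standing in for the real database, is shown below.

      import numpy as np

      # Hypothetical sizes: 500 mesh vertices, 150 identities, 20 expressions (neutral + 19)
      n_verts, n_id, n_expr = 500, 150, 20
      core = np.random.rand(3 * n_verts, n_id, n_expr)   # illustrative core tensor

      def synthesize_face(core, w_id, w_expr):
          """Contract the core tensor with identity and expression weights
          (mode-2 and mode-3 tensor-vector products)."""
          return np.einsum('vie,i,e->v', core, w_id, w_expr)

      w_id = np.random.dirichlet(np.ones(n_id))       # example identity weights
      w_expr = np.zeros(n_expr); w_expr[0] = 1.0      # e.g. the neutral expression
      mesh = synthesize_face(core, w_id, w_expr).reshape(n_verts, 3)
      print(mesh.shape)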

  17. High performance computing approaches for 3D reconstruction of complex biological specimens.

    PubMed

    da Silva, M Laura; Roca-Piera, Javier; Fernández, José-Jesús

    2010-01-01

    Knowledge of the structure of specimens is crucial to determine the role that they play in cellular and molecular biology. Obtaining the three-dimensional (3D) reconstruction by means of tomographic reconstruction algorithms requires large projection images and long processing times. Therefore, we propose the use of high performance computing (HPC) to cope with the huge computational demands of this problem. We have implemented a HPC strategy where the distribution of tasks follows the master-slave paradigm. The master processor distributes slabs of slices, pieces of the final 3D structure to reconstruct, among the slave processors and receives the reconstructed slices of the volume. We have evaluated the performance of our HPC approach using different sizes of the slab. We observed that it is possible to find an optimal slab size for the number of processors used that minimizes communication time while maintaining a reasonable grain of parallelism to be exploited by the set of processors.
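
    A toy version of the slab-based master-slave scheme is sketched below using Python multiprocessing instead of MPI; the reconstruction kernel is a placeholder, and the slab size is the tuning parameter the abstract discusses.

      import numpy as np
      from multiprocessing import Pool

      def reconstruct_slab(args):
          """Reconstruct one slab of slices (placeholder for the real tomographic kernel)."""
          start, slices = args
          return start, np.sqrt(slices)   # stand-in computation on this slab only

      def master(volume_slices, slab_size, workers=4):
          """Master splits the volume into slabs, farms them out, and reassembles the result."""
          slabs = [(i, volume_slices[i:i + slab_size])
                   for i in range(0, len(volume_slices), slab_size)]
          with Pool(workers) as pool:
              results = dict(pool.map(reconstruct_slab, slabs))
          return np.concatenate([results[i] for i, _ in slabs])

      if __name__ == "__main__":
          fake_volume = np.random.rand(64, 128, 128)   # 64 slices of a synthetic volume
          print(master(fake_volume, slab_size=8).shape)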

  18. Planned development of a 3D computer based on free-space optical interconnects

    NASA Astrophysics Data System (ADS)

    Neff, John A.; Guarino, David R.

    1994-05-01

    Free-space optical interconnection has the potential to provide upwards of a million data channels between planes of electronic circuits. This may result in the planar board and backplane structures of today giving way to 3-D stacks of wafers or multi-chip modules interconnected via channels running perpendicular to the processor planes, thereby eliminating much of the packaging overhead. Three-dimensional packaging is very appealing for tightly coupled fine-grained parallel computing, where the need for massive numbers of interconnections is severely taxing the capabilities of planar structures. This paper describes a coordinated effort by four research organizations to demonstrate an operational fine-grained parallel computer that achieves global connectivity through the use of free-space optical interconnects.

  19. Applying 3D measurements and computer matching algorithms to two firearm examination proficiency tests.

    PubMed

    Ott, Daniel; Thompson, Robert; Song, Junfeng

    2017-02-01

    In order for a crime laboratory to assess a firearms examiner's training, skills, experience, and aptitude, it is necessary for the examiner to participate in proficiency testing. As computer algorithms for comparisons of pattern evidence become more prevalent, it is of interest to test algorithm performance as well, using these same proficiency examinations. This article demonstrates the use of the Congruent Matching Cell (CMC) algorithm to compare 3D topography measurements of breech face impressions and firing pin impressions from a previously distributed firearms proficiency test. In addition, the algorithm is used to analyze the distribution of many comparisons from a collection of cartridge cases used to construct another recent set of proficiency tests. These results are provided along with visualizations that help to relate the features used in optical comparisons by examiners to the features used by computer comparison algorithms.
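
    A simplified reading of the CMC idea is sketched below: two registered topographies are divided into square cells and the cells whose normalized cross-correlation exceeds a threshold are counted. The cell size and threshold are illustrative, and the search over registration parameters (x, y, rotation) that the real algorithm performs is omitted.

      import numpy as np

      def congruent_matching_cells(topo_a, topo_b, cell=64, cc_threshold=0.5):
          """Count cells whose normalized cross-correlation exceeds the threshold."""
          h, w = topo_a.shape
          congruent = total = 0
          for r in range(0, h - cell + 1, cell):
              for c in range(0, w - cell + 1, cell):
                  a = topo_a[r:r + cell, c:c + cell].ravel()
                  b = topo_b[r:r + cell, c:c + cell].ravel()
                  a = a - a.mean(); b = b - b.mean()
                  denom = np.linalg.norm(a) * np.linalg.norm(b)
                  if denom == 0:
                      continue
                  total += 1
                  if (a @ b) / denom > cc_threshold:
                      congruent += 1
          return congruent, total

      # Two synthetic, highly similar surfaces standing in for measured topographies
      base = np.random.rand(256, 256)
      print(congruent_matching_cells(base, base + 0.01 * np.random.rand(256, 256)))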

  20. A new 3-D integral code for computation of accelerator magnets

    SciTech Connect

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite element codes; far-field boundaries are treated automatically, and the computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab.

  1. Computational Unification: a Vision for Connecting Researchers

    NASA Astrophysics Data System (ADS)

    Troy, R. M.; Kingrey, O. J.

    2002-12-01

    Computational Unification of science, once only a vision, is becoming a reality. This technology is based upon a scientifically defensible, general solution for Earth Science data management and processing. The computational unification of science offers a real opportunity to foster inter- and intra-discipline cooperation, and an end to 're-inventing the wheel'. As we move forward using computers as tools, it is past time to move from computationally isolating, "one-off" or discipline-specific solutions to a unified framework where research can be more easily shared, especially with researchers in other disciplines. The author will discuss how distributed meta-data, distributed processing and distributed data objects are structured to constitute a working interdisciplinary system, including how these resources lead to scientific defensibility through known lineage of all data products. An illustration of how scientific processes are encapsulated and executed illuminates how previously written processes and functions are integrated into the system efficiently and with minimal effort. Meta-data basics will illustrate how intricate relationships may easily be represented and used to good advantage. Retrieval techniques will be discussed, including the trade-offs of using meta-data versus embedded data, how the two may be integrated, and how simplifying assumptions may or may not help. This system is based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, whose goal was to find an alternative to the Hughes EOS-DIS system; it is presently offered by Science Tools Corporation, of which the author is a principal.

  2. Using Computer-Aided Design Software and 3D Printers to Improve Spatial Visualization

    ERIC Educational Resources Information Center

    Katsio-Loudis, Petros; Jones, Millie

    2015-01-01

    Many articles have been published on the use of 3D printing technology. From prefabricated homes and outdoor structures to human organs, 3D printing technology has found a niche in many fields, but especially education. With the introduction of AutoCAD technical drawing programs and now 3D printing, learners can use 3D printed models to develop…

  4. Tangent Bundle Elastica and Computer Vision.

    PubMed

    Ben-Shahar, Ohad; Ben-Yosef, Guy

    2015-01-01

    Visual curve completion, an early visual process that completes the occluded parts between observed boundary fragments (a.k.a. inducers), is a major problem in perceptual organization and a critical step toward higher level visual tasks in both biological and machine vision. Most computational contributions to solving this problem suggest desired perceptual properties that the completed contour should satisfy in the image plane, and then seek the mathematical curves that provide them. Alternatively, a few studies (including by the authors) have suggested framing the problem not in the image plane but rather in the unit tangent bundle R^2 × S^1, the space that abstracts the primary visual cortex, where curve completion allegedly occurs. Combining both schools, here we propose and develop a biologically plausible theory of elastica in the tangent bundle that provides not only perceptually superior completion results but also a rigorous computational prediction that inducer curvatures greatly affect the shape of the completed curve, as indeed indicated by human perception.

  5. Fast 3D reconstruction of tool wear based on monocular vision and multi-color structured light illuminator

    NASA Astrophysics Data System (ADS)

    Wang, Zhongren; Li, Bo; Zhou, Yuebin

    2014-11-01

    Fast 3D reconstruction of tool wear from 2D images is of great importance for 3D measurement and objective evaluation of the tool wear condition, determining accurate tool change timing and ensuring machined part quality. Extracting 3D information of the tool wear zone based on monocular multi-color structured light enables fast recovery of the surface topography of tool wear, which overcomes the problems of traditional methods, such as the solution diversity and slow convergence of the SFS method and the stereo matching required in 3D reconstruction from multiple images. In this paper, a new kind of multi-color structured light illuminator was put forward. An information mapping model was established among the illuminator's structure parameters, surface morphology and color images. The mathematical model to reconstruct 3D morphology based on monocular multi-color structured light was presented. Experimental results show that this method is effective and efficient for reconstructing the surface morphology of the tool wear zone.

  6. Audio-Visual Perception of 3D Cinematography: An fMRI Study Using Condition-Based and Computation-Based Analyses

    PubMed Central

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs and “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-source signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.

  7. Application of 3-D computer graphics for facial reconstruction and comparison with sculpting techniques.

    PubMed

    Vanezis, P; Blowes, R W; Linney, A D; Tan, A C; Richards, R; Neave, R

    1989-07-01

    Facial reconstruction has until now been carried out by the sculpting technique. This method involves building a face with clay or other suitable material on to a skull or its cast, taking into account appropriate facial thickness measurements together with information provided by anthropologists such as approximate age, sex, race and other individual idiosyncrasies. A method for facial reconstruction using 3-D computer graphics is presented and compared with the manual technique. The computer method involves initially digitising a skull using a laser scanner and video camera interfaced to a computer. A face from a data bank of previously digitised facial surfaces is then placed over the skull in the form of a mask, and the skin thickness is altered to conform with the underlying skull. We have shown that the computer method for reconstructing a face is feasible and, furthermore, has the advantage over the manual technique of speed and flexibility. Nevertheless, the technique is far from perfect. Further facial thickness data need to be collected, and the method requires evaluation using both known control skulls and later unknown remains.

  8. The Effects of 3D Computer Modelling on Conceptual Change about Seasons and Phases of the Moon

    ERIC Educational Resources Information Center

    Kucukozer, Huseyin

    2008-01-01

    In this study, prospective science teachers' misconceptions about the seasons and the phases of the Moon were determined, and then the effects of 3D computer modelling on their conceptual changes were investigated. The topics were covered in two classes with a total of 76 students using a predict-observe-explain strategy supported by 3D computer…

  10. Computational chemistry approach to protein kinase recognition using 3D stochastic van der Waals spectral moments.

    PubMed

    González-Díaz, Humberto; Saíz-Urra, Liane; Molina, Reinaldo; González-Díaz, Yenny; Sánchez-González, Angeles

    2007-04-30

    Three-dimensional (3D) protein structures now frequently lack functional annotations because of the increase in the rate at which chemical structures are solved with respect to experimental knowledge of biological activity. As a result, predicting structure-function relationships for proteins is an active research field in computational chemistry and has implications in medicinal chemistry, biochemistry and proteomics. In previous studies stochastic spectral moments were used to predict protein stability or function (González-Díaz, H. et al. Bioorg Med Chem 2005, 13, 323; Biopolymers 2005, 77, 296). Nevertheless, these moments take into consideration only electrostatic interactions and ignore other important factors such as van der Waals interactions. The present study introduces a new class of 3D structure molecular descriptors for folded proteins named the stochastic van der Waals spectral moments ((o)beta(k)). Among many possible applications, recognition of kinases was selected due to the fact that previous computational chemistry studies in this area have not been reported, despite the widespread distribution of kinases. The best linear model found was Kact = -9.44 (o)beta(0)c + 10.94 (o)beta(5)c - 2.40 (o)beta(0)i + 2.45 (o)beta(5)m + 0.73, where core (c), inner (i) and middle (m) refer to specific spatial protein regions. The model, with a high Matthew's regression coefficient (0.79), correctly classified 206 out of 230 proteins (89.6%), including both training and prediction series. An area under the ROC curve of 0.94 differentiates our model from a random classifier. A subsequent principal components analysis of 152 heterogeneous proteins demonstrated that beta(k) codifies information different from that of other descriptors used in protein computational chemistry studies. Finally, the model recognizes 110 out of 125 kinases (88.0%) in a virtual screening experiment, and this can be considered as an additional validation study (these proteins
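
    Once the four spectral moments have been computed for a protein, applying the reported linear model is a one-line calculation; a minimal sketch with made-up moment values follows. The decision cutoff is not stated in the abstract, so zero is used here purely as a placeholder.

      def kinase_score(beta0_core, beta5_core, beta0_inner, beta5_middle):
          """Linear model quoted in the abstract:
          Kact = -9.44*b0(c) + 10.94*b5(c) - 2.40*b0(i) + 2.45*b5(m) + 0.73"""
          return (-9.44 * beta0_core + 10.94 * beta5_core
                  - 2.40 * beta0_inner + 2.45 * beta5_middle + 0.73)

      # Illustrative (made-up) moment values for one protein
      score = kinase_score(0.82, 0.91, 0.40, 0.35)
      print(score, "kinase" if score > 0 else "non-kinase")   # cutoff of 0 is assumed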

  11. A brain-computer interface method combined with eye tracking for 3D interaction.

    PubMed

    Lee, Eui Chul; Woo, Jin Cheol; Kim, Jong Hwa; Whang, Mincheol; Park, Kang Ryoung

    2010-07-15

    With the recent increase in the number of three-dimensional (3D) applications, the need for interfaces to these applications has increased. Although the eye tracking method has been widely used as an interaction interface for hand-disabled persons, this approach cannot be used for depth directional navigation. To solve this problem, we propose a new brain computer interface (BCI) method in which the BCI and eye tracking are combined to analyze depth navigation, including selection and two-dimensional (2D) gaze direction, respectively. The proposed method is novel in the following five ways compared to previous works. First, a device to measure both the gaze direction and an electroencephalogram (EEG) pattern is proposed with the sensors needed to measure the EEG attached to a head-mounted eye tracking device. Second, the reliability of the BCI interface is verified by demonstrating that there is no difference between the real and the imaginary movements for the same work in terms of the EEG power spectrum. Third, depth control for the 3D interaction interface is implemented by an imaginary arm reaching movement. Fourth, a selection method is implemented by an imaginary hand grabbing movement. Finally, for the independent operation of gazing and the BCI, a mode selection method is proposed that measures a user's concentration by analyzing the pupil accommodation speed, which is not affected by the operation of gazing and the BCI. According to experimental results, we confirmed the feasibility of the proposed 3D interaction method using eye tracking and a BCI. Copyright 2010 Elsevier B.V. All rights reserved.

  12. Soft computing approach to 3D lung nodule segmentation in CT.

    PubMed

    Badura, P; Pietka, E

    2014-10-01

    This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm, the mask generation. Its main goal is to process some specific types of nodules connected to the pleura or vessels. It consists of basic image processing operations as well as dedicated routines for the specific cases of nodules. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database.

  13. Cloud4Psi: cloud computing for 3D protein structure similarity searching.

    PubMed

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-10-01

    Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT) are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed the cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. © The Author 2014. Published by Oxford University Press.

  14. Computing Emissions from Active-Region Loops in 3D and High Resolution

    NASA Astrophysics Data System (ADS)

    Mok, Yung; Lionello, R.; Mikic, Z.; Linker, J.

    2009-05-01

    Plasma loops are widely observed in EUV and soft X-ray over active regions, but their thermal properties and formation mechanism have been controversial. In this work, we are able to reproduce some of the loop properties by forward modeling. Using an MDI magnetogram, we constructed a mildly sheared force-free magnetic field based on parameters deduced from observation. The field was computed at unusually high spatial resolution in order to resolve the expected thin coronal loops. Although the magnetogram has fine structures at the photospheric level, the field in the corona is smooth, as expected. The field lines have moderately complex connectivity. We then chose a specific heating model and computed the thermal structure in 3D. Although the overall temperature profile has only moderate spatial variations in the corona, the computed line-of-sight integrated EUV emissions show a complex system of thin plasma loops. Initial analysis shows that thermal instability leads to the time variation of the loop brightness. The lack of cross-section expansion is also apparent. The location of the loops and their relationship with the magnetic field will also be discussed. Work supported by HTP of NASA. Computation resources provided by NAS at Ames Research Center, NASA.

  15. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    The 3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes and to parallelize the Jacobi method in order to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of a parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. Yet, the PJ method reduces the computational time to some extent for large grid sizes.
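
    For reference, a minimal Jacobi sweep for the 3D Poisson equation on a uniform grid is sketched below in Python rather than MATLAB; the source term and boundary conditions are illustrative, not the micropump geometry.

      import numpy as np

      def jacobi_poisson_3d(f, h, iters=200):
          """Jacobi iterations for lap(u) = f on a uniform grid with spacing h and
          homogeneous Dirichlet boundaries (u = 0 on the boundary)."""
          u = np.zeros_like(f)
          for _ in range(iters):
              u_new = u.copy()
              u_new[1:-1, 1:-1, 1:-1] = (
                  u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
                  u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
                  u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] -
                  h * h * f[1:-1, 1:-1, 1:-1]) / 6.0
              u = u_new
          return u

      # Unit source in the middle of a 32^3 grid (illustrative only)
      n, h = 32, 1.0 / 31
      f = np.zeros((n, n, n)); f[n // 2, n // 2, n // 2] = -1.0
      print(jacobi_poisson_3d(f, h).max())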

  16. Efficacy of computer-assisted, 3D motion-capture toothbrushing instruction.

    PubMed

    Kim, Kee-Deog; Jeong, Jin-Sun; Lee, Hae Na; Gu, Yu; Kim, Kyeong-Seop; Lee, Jeong-Whan; Park, Wonse

    2015-07-01

    The objective of this study was to compare the efficacy of computer-assisted toothbrushing instruction (TBI) using a smart toothbrush (ST) and smart mirror (SM) for plaque control to that of conventional TBI. We evaluated the plaque removal efficacy of an ST comprising a computer-assisted, wirelessly linked, three-dimensional (3D) motion-capture, data-logging, and SM system in TBI. We also evaluated the efficacy of TBI with the ST and SM system by analyzing the reductions of the modified Quigley-Hein plaque index in 60 volunteers. These volunteers were separated randomly into two groups: conventional TBI (control group) and computer-assisted TBI (experimental group). The changes in the plaque indexes were recorded immediately, 1 week, 1 month, and 10 months after TBI. The patterns of decrease in the modified Quigley-Hein plaque indexes were similar in the two groups. Reductions of the plaque indexes of both groups in each time period were observed (P < 0.0001), and the effects of TBI did not differ between the two groups (P = 0.3803). All volunteers were sufficiently motivated in using this new system. The reported new, computer-assisted TBI system might be an alternative option for controlling dental plaque and maintaining oral hygiene. Individuals can be motivated by the new system, while comparable effects in controlling dental plaque can be achieved.

  17. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.
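
    The paper's benchmark code is not reproduced here, but two of the tasks it times (keypoint extraction and face detection) can be exercised with standard OpenCV calls, as in the hedged sketch below; the synthetic input image simply stands in for a camera frame.

      import cv2
      import numpy as np

      # Synthetic grayscale frame; on a device this would come from the phone camera
      img = (np.random.rand(480, 640) * 255).astype(np.uint8)

      # Keypoint extraction with ORB, a lightweight detector/descriptor
      orb = cv2.ORB_create(nfeatures=500)
      keypoints, descriptors = orb.detectAndCompute(img, None)
      print(len(keypoints), None if descriptors is None else descriptors.shape)

      # Face detection with the stock Haar cascade shipped with the opencv-python package
      cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
      print(len(faces))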

  18. Optimization of the aperture and the transducer characteristics of a 3D ultrasound computer tomography system

    NASA Astrophysics Data System (ADS)

    Ruiter, Nicole V.; Zapf, Michael; Hopp, Torsten; Dapp, Robin; Gemmeke, Hartmut

    2014-03-01

    A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). The aim of this work was to design a new aperture for our full 3D USCT that extends the properties of the current aperture to a larger ROI fitting the buoyant breast in water and decreases artifacts in transmission tomography. The optimization resulted in a larger opening angle of the transducers, a larger diameter of the aperture and an approximately homogeneous distribution of the transducers, with locally random distances. The developed optimization methods allow us to automatically generate an optimized aperture for given diameters of the aperture and transducer arrays, as well as to compare it quantitatively to other arbitrary apertures. Thus, during the design phase of the next-generation KIT 3D USCT, the image quality can be balanced against the specification parameters and the given hardware and cost limitations. The methods can be applied to general aperture optimization, limited only by the assumptions of a hemispherical aperture and circular transducer arrays.

  19. Computational Study of 3-D Hot-Spot Initiation in Shocked Insensitive High-Explosive

    NASA Astrophysics Data System (ADS)

    Najjar, F. M.; Howard, W. M.; Fried, L. E.

    2011-06-01

    High explosive shock sensitivity is controlled by a combination of mechanical response, thermal properties, and chemical properties. An understanding of the interplay of these physical phenomena in realistic condensed energetic materials is currently lacking. A multiscale computational framework is developed to investigate hot spot (void) ignition in a single crystal of an insensitive HE, TATB. Atomistic MD simulations are performed to provide the key chemical reactions, and these reaction rates are used in 3-D multiphysics simulations. The multiphysics code, ALE3D, is linked to the chemistry software, Cheetah, and a three-way coupled approach is pursued including hydrodynamic, thermal and chemical analyses. A single spherical air bubble is embedded in the insensitive HE and its collapse due to shock initiation is evolved numerically in time, while the ignition processes due to chemical reactions are studied. Our current predictions showcase several interesting features regarding hot spot dynamics, including the formation of a "secondary" jet. Results obtained with hydro-thermo-chemical processes leading to ignition growth will be discussed for various pore sizes and different shock pressures. LLNL-ABS-471438. This work performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344.

  20. Breast density measurement: 3D cone beam computed tomography (CBCT) images versus 2D digital mammograms

    NASA Astrophysics Data System (ADS)

    Han, Tao; Lai, Chao-Jen; Chen, Lingyun; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Yang, Wei T.; Shaw, Chris C.

    2009-02-01

    Breast density has been recognized as one of the major risk factors for breast cancer. However, breast density is currently estimated using mammograms which are intrinsically 2D in nature and cannot accurately represent the real breast anatomy. In this study, a novel technique for measuring breast density based on the segmentation of 3D cone beam CT (CBCT) images was developed and the results were compared to those obtained from 2D digital mammograms. 16 mastectomy breast specimens were imaged with a bench top flat-panel based CBCT system. The reconstructed 3D CT images were corrected for the cupping artifacts and then filtered to reduce the noise level, followed by using threshold-based segmentation to separate the dense tissue from the adipose tissue. For each breast specimen, volumes of the dense tissue structures and the entire breast were computed and used to calculate the volumetric breast density. BI-RADS categories were derived from the measured breast densities and compared with those estimated from conventional digital mammograms. The results show that in 10 of 16 cases the BI-RADS categories derived from the CBCT images were lower than those derived from the mammograms by one category. Thus, breasts considered as dense in mammographic examinations may not be considered as dense with the CBCT images. This result indicates that the relation between breast cancer risk and true (volumetric) breast density needs to be further investigated.
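
    The density measurement itself reduces to counting voxels above a threshold inside the breast mask of the corrected CBCT volume. A minimal sketch is shown below; the mask, threshold and synthetic volume are illustrative, and the cupping correction and noise filtering steps are omitted.

      import numpy as np

      def volumetric_breast_density(volume, breast_mask, dense_threshold):
          """Fraction of breast voxels classified as dense (fibroglandular) tissue by a
          simple global threshold on the corrected, denoised CBCT voxel values."""
          breast_voxels = volume[breast_mask]
          return (breast_voxels > dense_threshold).sum() / breast_voxels.size

      # Synthetic example with a hypothetical threshold
      vol = np.random.rand(64, 64, 64)
      mask = np.ones_like(vol, dtype=bool)
      print(volumetric_breast_density(vol, mask, dense_threshold=0.7))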

  1. An improved version of NCOREL: A computer program for 3-D nonlinear supersonic potential flow computations

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1988-01-01

    A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Other new features include improved internal grid generation using analytic conformal mappings, a simple geometric input based on the Harris wave-drag format originally developed for panel methods, and an internal geometry package.

  2. The NCOREL computer program for 3D nonlinear supersonic potential flow computations

    NASA Technical Reports Server (NTRS)

    Siclari, M. J.

    1983-01-01

    An innovative computational technique (NCOREL) was established for the treatment of three dimensional supersonic flows. The method is nonlinear in that it solves the nonconservative finite difference analog of the full potential equation and can predict the formation of supercritical cross flow regions, embedded and bow shocks. The method implicitly computes a conical flow at the apex (R = 0) of a spherical coordinate system and uses a fully implicit marching technique to obtain three dimensional cross flow solutions. This implies that the radial Mach number must remain supersonic. The cross flow solutions are obtained by using type dependent transonic relaxation techniques with the type dependency linked to the character of the cross flow velocity (i.e., subsonic/supersonic). The spherical coordinate system and marching on spherical surfaces is ideally suited to the computation of wing flows at low supersonic Mach numbers due to the elimination of the subsonic axial Mach number problems that exist in other marching codes that utilize Cartesian transverse marching planes.

  3. A 3-D Computational Study of a Variable Camber Continuous Trailing Edge Flap (VCCTEF) Spanwise Segment

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.; Nguyen, Nhan T.

    2015-01-01

    Results of a computational study carried out to explore the effects of various elastomer configurations joining spanwise contiguous Variable Camber Continuous Trailing Edge Flap (VCCTEF) segments are reported here. This research is carried out as a proof-of-concept study that will seek to push the flight envelope in cruise with drag optimization as the objective. Cruise conditions can be well off-design, for example due to environmental conditions, maneuvering, etc. To handle these off-design conditions, flap deflection is used: when the flap is deflected in a given direction, the aircraft angle of attack changes accordingly to maintain a given lift. The angle of attack is also a design parameter along with the flap deflection. In a previous 2D study,1 the effect of camber was investigated and the results revealed some insight into the relative merit of various camber settings of the VCCTEF. The present state of the art has not advanced sufficiently to do a full 3-D viscous analysis of the whole NASA Generic Transport Model (GTM) wing with the VCCTEF deployed with elastomers. Therefore, this study seeks to explore the local effects of three contiguous flap segments on the lift and drag of a model devised here to determine possible trades among various flap deflections to achieve desired lift and drag results. Although this approach is an approximation, it provides new insights into the "local" effects of the relative deflections of the contiguous spanwise flap systems and various elastomer segment configurations. The present study is a natural extension of the 2-D study to assess these local 3-D effects. The design cruise condition at 36,000 feet, at a free-stream Mach number of 0.797 and a mean aerodynamic chord (MAC) based Reynolds number of 30.734×10^6, is simulated for an angle of attack (AoA) range of 0 to 6 deg. In the previous 2-D study, the calculations revealed that the parabolic arc camber (1x2x3) and circular arc camber (VCCTEF222) offered the best L

  4. Intelligent Computer Vision System for Automated Classification

    NASA Astrophysics Data System (ADS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.

  5. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks and their importance is still increasing due to technological advances and increasing social acceptance. Especially in the field of driver assistance systems the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information.

  6. Intelligent Computer Vision System for Automated Classification

    SciTech Connect

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-21

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPtauS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.

  7. Computer vision for yarn microtension measurement.

    PubMed

    Wang, Qing; Lu, Changhou; Huang, Ran; Pan, Wei; Li, Xueyong

    2016-03-20

    Yarn tension is an important parameter for assuring textile quality. In this paper, an optical method to measure microtension of moving yarn automatically in the winding system is proposed. The proposed method can measure microtension of the moving yarn by analyzing the captured images. With a line laser illuminating the moving yarn, a linear array CCD camera is used to capture the images. Design principles of yarn microtension measuring equipment based on computer vision are presented. A local border difference algorithm is used to search the upper border of the moving yarn as the characteristic line, and Fourier descriptors are used to filter the high-frequency noises caused by unevenness of the yarn diameter. Based on the average value of the characteristic line, the captured images were classified into sagging images and vibration images. The average value is considered a sag coordinate of the sagging images. The peak and trough coordinates of the vibration are obtained by change-point detection. Then, according to axially moving string and catenary theory, we obtain the microtension of the moving yarn. Experiments were performed and compared with a resistance strain sensor, and the results prove that the proposed method is effective and of high accuracy.
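
    The final step of the method, recovering tension from the measured sag or from the vibration of the yarn, rests on classical string relations. The sketch below uses the textbook shallow-catenary and stationary-string formulas with made-up yarn properties; the paper's axially moving string model includes a transport-speed term that is omitted here.

      def tension_from_sag(weight_per_length, span, sag):
          """Horizontal tension of a shallow catenary, T ~ w*L^2 / (8*d)."""
          return weight_per_length * span**2 / (8.0 * sag)

      def tension_from_vibration(mass_per_length, span, fundamental_freq):
          """Tension of a taut string from its fundamental frequency, T = 4*mu*L^2*f^2."""
          return 4.0 * mass_per_length * span**2 * fundamental_freq**2

      # illustrative numbers only (hypothetical yarn properties, SI units)
      print(tension_from_sag(weight_per_length=2e-4, span=0.5, sag=0.002))                  # N
      print(tension_from_vibration(mass_per_length=2e-5, span=0.5, fundamental_freq=40.0))  # N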

  8. Roughness receptivity studies in a 3-D boundary layer - Flight tests and computations

    NASA Astrophysics Data System (ADS)

    Carpenter, Andrew L.; Saric, William S.; Reed, Helen L.

    The receptivity of 3-D boundary layers to micron-sized, spanwise-periodic Discrete Roughness Elements (DREs) was studied. The DREs were applied to the leading edge of a 30-degree swept-wing at the wavelength of the most unstable disturbance. In this case, calibrated, multi-element hotfilm sensors were used to measure disturbance wall shear stress. The roughness height was varied from 0 to 50 microns. Thus, the disturbance-shear-stress amplitude variations were determined as a function of modulated DRE heights. The computational work was conducted parallel to the flight experiments. The complete viscous flowfield over the O-2 aircraft with the SWIFT model mounted on the port wing store pylon was successfully modeled and validated with the flight data. This highly accurate basic-state solution was incorporated into linear stability calculations and the wave growth associated with the crossflow instability was calculated.

  9. Fast and Robust Sixth Order Multigrid Computation for 3D Convection Diffusion Equation

    PubMed Central

    Wang, Yin; Zhang, Jun

    2010-01-01

    We present a sixth order explicit compact finite difference scheme to solve the three dimensional (3D) convection diffusion equation. We first use multiscale multigrid method to solve the linear systems arising from a 19-point fourth order discretization scheme to compute the fourth order solutions on both the coarse grid and the fine grid. Then an operator based interpolation scheme combined with an extrapolation technique is used to approximate the sixth order accurate solution on the fine grid. Since the multigrid method using a standard point relaxation smoother may fail to achieve the optimal grid independent convergence rate for solving convection diffusion equation with a high Reynolds number, we implement the plane relaxation smoother in the multigrid solver to achieve better grid independency. Supporting numerical results are presented to demonstrate the efficiency and accuracy of the sixth order compact scheme (SOC), compared with the previously published fourth order compact scheme (FOC). PMID:21151737
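
    The sixth-order step described here combines fourth-order solutions from the fine and coarse grids. A minimal one-dimensional Richardson-extrapolation sketch of that combination is shown below; it uses simple injection at coincident points rather than the paper's operator-based interpolation, so it is only an illustration of the idea.

      import numpy as np

      def richardson_extrapolate(u_fine, u_coarse, order=4):
          """Richardson extrapolation at coincident points of a fine grid (spacing h)
          and a coarse grid (spacing 2h), assuming the error is O(h**order)."""
          factor = 2**order
          u_ext = u_fine.copy()
          # coarse points coincide with every other fine point in this 1-D sketch
          u_ext[::2] = (factor * u_fine[::2] - u_coarse) / (factor - 1)
          return u_ext

      # 1-D illustration: errors scaling as h^4 cancel at the coincident points
      x_fine = np.linspace(0.0, 1.0, 9)
      u_fine = np.sin(np.pi * x_fine) + 1e-4          # pretend fine-grid error
      u_coarse = np.sin(np.pi * x_fine[::2]) + 16e-4  # 2^4 larger coarse-grid error
      print(richardson_extrapolate(u_fine, u_coarse))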

  10. Inspecting wood surface roughness using computer vision

    NASA Astrophysics Data System (ADS)

    Zhao, Xuezeng

    1995-01-01

    Wood surface roughness is one of the important indexes of manufactured wood products. This paper presents an attempt to develop a new method to evaluate manufactured wood surface roughness through the utilization of image processing and pattern recognition techniques. In this paper a collimated plane of light or a laser is directed onto the inspected wood surface at a sharp angle of incidence. An optical system consisting of lenses focuses the image of the surface onto the objective of a CCD camera; the CCD camera captures the image of the surface, and a CA6300 board digitizes it. The digitized image is transmitted to a microcomputer. Through the use of the methodology presented in this paper, the computer filters the noise and wood anatomical grain and gives an evaluation of the nature of the manufactured wood surface. The preliminary results indicated that the method has the advantages of being non-contact, three-dimensional, and high-speed. This method can be used in classification and in-time measurement of manufactured wood products.
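
    In a light-sectioning setup of this kind, the core computation is extracting the bright laser line from each image column and converting its deviation into a roughness figure such as Ra. The sketch below works on a synthetic grayscale image and is not the paper's CA6300-based pipeline; the calibration factor mm_per_pixel is a hypothetical parameter.

      import numpy as np

      def profile_from_light_section(image, mm_per_pixel):
          """Extract the laser-line profile (brightest row per column) and return it in mm."""
          rows = np.argmax(image, axis=0).astype(float)
          return rows * mm_per_pixel

      def roughness_ra(profile_mm):
          """Arithmetic mean roughness Ra about the mean line of the profile."""
          return np.mean(np.abs(profile_mm - np.mean(profile_mm)))

      # synthetic example: a faint noisy surface with a bright, wavy laser line
      rng = np.random.default_rng(0)
      img = rng.normal(10.0, 2.0, size=(100, 200))
      line_rows = (50 + 3 * np.sin(np.linspace(0, 6 * np.pi, 200))).astype(int)
      img[line_rows, np.arange(200)] = 255.0
      print(roughness_ra(profile_from_light_section(img, mm_per_pixel=0.01)))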

  11. Computer vision guided virtual craniofacial reconstruction.

    PubMed

    Bhandarkar, Suchendra M; Chowdhury, Ananda S; Tang, Yarong; Yu, Jack C; Tollner, Ernest W

    2007-09-01

    The problem of virtual craniofacial reconstruction from a sequence of computed tomography (CT) images is addressed and is modeled as a rigid surface registration problem. Two different classes of surface matching algorithms, namely the data aligned rigidity constrained exhaustive search (DARCES) algorithm and the iterative closest point (ICP) algorithm are first used in isolation. Since the human bone can be reasonably approximated as a rigid body, 3D rigid surface registration techniques such as the DARCES and ICP algorithms are deemed to be well suited for the purpose of aligning the fractured bone fragments. A synergistic combination of these two algorithms, termed as the hybrid DARCES-ICP algorithm, is proposed. The hybrid algorithm is shown to result in a more accurate mandibular reconstruction when compared to the individual algorithms used in isolation. The proposed scheme for virtual reconstructive surgery would prove to be of tremendous benefit to the operating surgeons as it would allow them to pre-visualize the reconstructed mandible (i.e., the end-product of their work), before performing the actual surgical procedure. Experimental results on both phantom and real (human) patient datasets are presented.
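
    The ICP half of the hybrid algorithm can be sketched as a point-to-point iteration with an SVD-based rigid transform update; the DARCES stage, which supplies the coarse initial alignment, is not reproduced here, and the synthetic point sets below stand in for the fractured bone surfaces.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:      # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, dst_c - R @ src_c

      def icp(src, dst, iters=20):
          """Point-to-point ICP: match each source point to its nearest target point,
          then update the rigid transform; assumes a reasonable initial alignment."""
          tree = cKDTree(dst)
          cur = src.copy()
          for _ in range(iters):
              _, idx = tree.query(cur)
              R, t = best_rigid_transform(cur, dst[idx])
              cur = cur @ R.T + t
          return cur

      # usage sketch with a synthetic "fragment" point cloud
      pts = np.random.default_rng(1).normal(size=(200, 3))
      th = 0.15
      Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
                     [np.sin(th),  np.cos(th), 0.0],
                     [0.0,         0.0,        1.0]])
      moved = pts @ Rz.T + np.array([0.1, 0.0, 0.05])
      aligned = icp(pts, moved)
      print(np.mean(np.linalg.norm(aligned - moved, axis=1)))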

  12. Development of complex 3D microstructures based on computer generated holography and their usage for biomedical applications

    NASA Astrophysics Data System (ADS)

    Palevicius, Arvydas; Grigaliunas, Viktoras; Janusas, Giedrius; Palevicius, Paulius; Sakalys, Rokas

    2016-04-01

    The main focus of the paper is the development of a technological route for the production of a complex 3D microstructure, from its design by the method of computer-generated holography to its physical 3D patterning by electron beam lithography and thermal replication, for use in biomedical applications. Phase data for the complex 3D microstructure were generated using the Gerchberg-Saxton algorithm and later used to produce a computer-generated hologram. Physical implementation of the microstructure was done using a single layer of polymethyl methacrylate (PMMA) as the basis for the 3D microstructure, which was exposed using the e-beam lithography system e-Line and replicated using high-frequency vibration. The manufactured 3D microstructure is used for designing a micro sensor for biomedical applications.
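
    The Gerchberg-Saxton step mentioned above iterates between the hologram plane and the image plane, keeping the computed phase while re-imposing the known amplitude constraints. The FFT-based sketch below produces a phase-only hologram for a simple target pattern; it does not attempt to reproduce the paper's 3D microstructure design or e-beam process parameters.

      import numpy as np

      def gerchberg_saxton(target_amplitude, iterations=50):
          """Iterative phase retrieval: returns a phase-only hologram whose far-field
          (FFT) amplitude approximates target_amplitude."""
          phase = 2 * np.pi * np.random.default_rng(0).random(target_amplitude.shape)
          for _ in range(iterations):
              image_field = np.fft.fft2(np.exp(1j * phase))              # hologram -> image plane
              image_field = target_amplitude * np.exp(1j * np.angle(image_field))
              holo_field = np.fft.ifft2(image_field)                     # image -> hologram plane
              phase = np.angle(holo_field)                               # keep phase, drop amplitude
          return phase

      # usage: a simple square target pattern
      target = np.zeros((128, 128))
      target[48:80, 48:80] = 1.0
      holo_phase = gerchberg_saxton(target)
      reconstruction = np.abs(np.fft.fft2(np.exp(1j * holo_phase)))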

  13. Computation of a high-resolution MRI 3D stereotaxic atlas of the sheep brain.

    PubMed

    Ella, Arsène; Delgadillo, José A; Chemineau, Philippe; Keller, Matthieu

    2017-02-15

    The sheep model was first used in the fields of animal reproduction and veterinary sciences and then was utilized in fundamental and preclinical studies. For more than a decade, magnetic resonance (MR) studies performed on this model have been increasingly reported, especially in the field of neuroscience. To contribute to MR translational neuroscience research, a brain template and an atlas are necessary. We have recently generated the first complete T1-weighted (T1W) and T2W MR population average images (or templates) of in vivo sheep brains. In this study, we 1) defined a 3D stereotaxic coordinate system for previously established in vivo population average templates; 2) used deformation fields obtained during optimized nonlinear registrations to compute nonlinear tissues or prior probability maps (nlTPMs) of cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM) tissues; 3) delineated 25 external and 28 internal sheep brain structures by segmenting both templates and nlTPMs; and 4) annotated and labeled these structures using an existing histological atlas. We built a quality high-resolution 3D atlas of average in vivo sheep brains linked to a reference stereotaxic space. The atlas and nlTPMs, associated with previously computed T1W and T2W in vivo sheep brain templates and nlTPMs, provide a complete set of imaging space that are able to be imported into other imaging software programs and could be used as standardized tools for neuroimaging studies or other neuroscience methods, such as image registration, image segmentation, identification of brain structures, implementation of recording devices, or neuronavigation. J. Comp. Neurol. 525:676-692, 2017. © 2016 Wiley Periodicals, Inc.

  14. Ceramic scaffolds produced by computer-assisted 3D printing and sintering: characterization and biocompatibility investigations.

    PubMed

    Warnke, Patrick H; Seitz, Hermann; Warnke, Frauke; Becker, Stephan T; Sivananthan, Sureshan; Sherry, Eugene; Liu, Qin; Wiltfang, Jörg; Douglas, Timothy

    2010-04-01

    Hydroxyapatite (HAP) and tricalcium phosphate (TCP) are two very common ceramic materials for bone replacement. However, in general HAP and TCP scaffolds are not tailored to the exact dimensions of the defect site and are mainly used as granules or beads. Some scaffolds are available as ordinary blocks, but cannot be customized for individual perfect fit. Using computer-assisted 3D printing, an emerging rapid prototyping technique, individual three-dimensional ceramic scaffolds can be built up from TCP or HAP powder layer by layer with subsequent sintering. These scaffolds have precise dimensions and highly defined and regular internal characteristics such as pore size. External shape and internal characteristics such as pore size can be fabricated using Computer Assisted Design (CAD) based on individual patient data. Thus, these scaffolds could be designed as perfect fit replacements to reconstruct the patient's skeleton. Before their use as bone replacement materials in vivo, in vitro testing of these scaffolds is necessary. In this study, the behavior of human osteoblasts on HAP and TCP scaffolds was investigated. The commonly used bone replacement material BioOss(R) served as control. Biocompatibility was assessed by scanning electron microscopy (SEM), fluorescence microscopy after staining for cell vitality with fluorescin diacetate (FDA) and propidium iodide (PI) and the MTT, LDH, and WST biocompatibility tests. Both versions were colonised by human osteoblasts, however more cells were seen on HAP scaffolds than TCP scaffolds. Cell vitality staining and MTT, LDH, and WST tests showed superior biocompatibility of HAP scaffolds to BioOss, while BioOss was more compatible than TCP. Further experiments are necessary to determine biocompatibility in vivo. Future modifications of 3D printed scaffolds offer advantageous features for Tissue Engineering. The integration of channels could allow for vascular and nerve ingrowth into the scaffold. Also the complex shapes

  15. Analysis of bite marks in foodstuffs by computer tomography (cone beam CT)--3D reconstruction.

    PubMed

    Marques, Jeidson; Musse, Jamilly; Caetano, Catarina; Corte-Real, Francisco; Corte-Real, Ana Teresa

    2013-12-01

    The use of three-dimensional (3D) analysis of forensic evidence is highlighted in comparison with traditional methods. This three-dimensional analysis is based on the registration of the surface from a bitten object. The authors propose to use Cone Beam Computed Tomography (CBCT), which is used in dental practice, in order to study the surface and interior of bitten objects and dental casts of suspects. In this study, CBCT is applied to the analysis of bite marks in foodstuffs, which may be found in a forensic case scenario. Six different types of foodstuffs were used: chocolate, cheese, apple, chewing gum, pizza and tart (flaky pastry and custard). The food was bitten into and dental casts of the possible suspects were made. The dental casts and bitten objects were registered using an x-ray source and the CBCT equipment iCAT® (Pennsylvania, USA). The software InVivo5® (Anatomage Inc, USA) was used to visualize and analyze the tomographic slices and 3D reconstructions of the objects. For each material, its density was estimated by two methods: HU values and specific gravity. All the materials used were successfully reconstructed as good quality 3D images. The relative densities of the materials under study were compared. Amongst the foodstuffs, the chocolate had the highest density (median value 100.5 HU and 1.36 g/cm(3)), while the pizza showed the lowest (median value -775 HU and 0.39 g/cm(3)), on both scales. Through tomographic slices and three-dimensional reconstructions it was possible to perform the metric analysis of the bite marks in all the foodstuffs, except for the pizza. These measurements could also be obtained from the dental casts. The depth of the bite mark was also successfully determined in all the foodstuffs except for the pizza. Cone Beam Computed Tomography has the potential to become an important tool for forensic sciences, namely for the registration and analysis of bite marks in foodstuffs that may be found in a crime

  16. Interaction of 3d transition metal atoms with charged ion projectiles from Electron Nuclear Dynamics computation

    NASA Astrophysics Data System (ADS)

    Hagelberg, Frank

    2003-03-01

    Computational results on atomic scattering between charged projectiles and transition metal target atoms are presented. This work aims at obtaining detailed information about charge, spin and energy transfer processes that occur between the interacting particles. An in-depth understanding of these phenomena is expected to provide a theoretical basis for the interpretation of various types of ion beam experiments, ranging from gas phase chromatography to spectroscopic observations of fast ions in ferromagnetic media. This contribution focuses on the scattering of light projectiles ranging from He to O, that are prepared in various initial charge states, by 3d transition metal atoms. The presented computations are performed in the framework of Electron Nuclear Dynamics (END)^1 theory which incorporates the coupling between electronic and nuclear degrees of freedom without reliance on the computationally cumbersome and frequently intractable determination of potential energy surfaces. In the present application of END theory to ion - transition metal atom scattering, a supermolecule approach is utilized in conjunction with a spin-unrestricted single determinantal wave function describing the electronic system. Integral scattering, charge and spin exchange cross sections are discussed as functions of the elementary parameters of the problem, such as projectile and target atomic numbers as well as projectile charge and initial kinetic energy. ^1 E.Deumens, A.Diz, R.Longo, Y.Oehrn, Rev.Mod.Phys. 66, 917 (1994)

  17. Enabling 3D-Liver Perfusion Mapping from MR-DCE Imaging Using Distributed Computing.

    PubMed

    Leporq, Benjamin; Camarasu-Pop, Sorina; Davila-Serrano, Eduardo E; Pilleul, Frank; Beuf, Olivier

    2013-01-01

    An MR acquisition protocol and a processing method using distributed computing on the European Grid Infrastructure (EGI) to allow 3D liver perfusion parametric mapping after Magnetic Resonance Dynamic Contrast Enhanced (MR-DCE) imaging are presented. Seven patients (one healthy control and six with chronic liver diseases) were prospectively enrolled after liver biopsy. MR-dynamic acquisition was continuously performed in free breathing for two minutes after simultaneous intravascular contrast agent (MS-325 blood pool agent) injection. The hepatic capillary system was modeled by a 3-parameter one-compartment pharmacokinetic model. The processing step was parallelized and executed on the EGI. It was modeled and implemented as a grid workflow using the Gwendia language and the MOTEUR workflow engine. Results showed good reproducibility in repeated processing on the grid. The results obtained from the grid were well correlated with the ROI-based reference method run locally on a personal computer. The speed-up range was 71 to 242 with an average value of 126. In conclusion, distributed computing applied to perfusion mapping brings a significant speed-up to the quantification step to be used for further clinical studies in a research context. Accuracy would be improved with the higher image SNR accessible on the latest 3T MR systems available today.
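
    The per-voxel quantification step fits a small pharmacokinetic model to each tissue curve. The sketch below fits a generic three-parameter one-compartment form (uptake rate, washout rate, delay) to a synthetic curve with scipy; the exact parameterisation used in the paper may differ, and the arterial input function here is purely illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def one_compartment(t, k1, k2, delay, aif, dt):
          """Tissue curve C(t) = k1 * [AIF(t - delay) convolved with exp(-k2*t)];
          a generic 3-parameter one-compartment form, not necessarily the paper's."""
          shifted = np.interp(t - delay, t, aif, left=0.0)
          kernel = np.exp(-k2 * t)
          return k1 * np.convolve(shifted, kernel)[: len(t)] * dt

      # synthetic example: gamma-variate arterial input and a noisy tissue response
      t = np.arange(0, 120, 1.0)                          # seconds
      aif = (t / 10.0) ** 2 * np.exp(-t / 10.0)
      true = one_compartment(t, 0.05, 0.08, 4.0, aif, 1.0)
      noisy = true + np.random.default_rng(0).normal(0, 0.002, t.size)

      popt, _ = curve_fit(lambda tt, k1, k2, d: one_compartment(tt, k1, k2, d, aif, 1.0),
                          t, noisy, p0=[0.01, 0.05, 0.0])
      print(popt)   # estimated (k1, k2, delay) for this voxel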

  18. A computational model of perceptual grouping and 3D surface completion in the mime effect.

    PubMed

    Mtibaa, Riadh; Idesawa, Masanori; Sakaguchi, Yutaka; Ishida, Fumihiko

    2008-09-01

    We propose a computational model of perceptual grouping for explaining the 3D shape representation of an illusory percept called "mime effect." This effect is associated with the generation of an illusory, volumetric perception that can be induced by particular distributions of inducing stimuli such as cones, whose orientations affect the stability of illusory perception. The authors have attempted to explain the characteristics of the shape representation of the mime effect using a neural network model that consists of four types of cells-encoding (E), normalizing (N), energetic (EN), and geometric (G) cells. E cells represent both the positions and orientations of inducing stimuli and the mime-effect shape, and N cells regulate the activity of E cells. The interactions of E cells generate dynamics whose mode indicates the stability of illusory perception; a stable dynamics mode indicates a stable perception, whereas a chaotic dynamics mode indicates an unstable perception. EN cells compute the Liapunov energetic exponent (LEE) from an energy function of the system of E cells. The stable and chaotic dynamics modes are identified by strictly negative and strictly positive values of LEE, respectively. In case of stability, G cells perform a particular surface interpolation for completing the mime effect shape. The authors confirm the model behaviour by means of computer-simulated experiments. The relation between the model behaviour and the shape representation in the human brain is also discussed.

  19. FURN3D: A computer code for radiative heat transfer in pulverized coal furnaces

    SciTech Connect

    Ahluwalia, R.K.; Im, K.H.

    1992-08-01

    A computer code FURN3D has been developed for assessing the impact of burning different coals on the heat absorption pattern in pulverized coal furnaces. The code is unique in its ability to conduct detailed spectral calculations of radiation transport in furnaces fully accounting for the size distributions of char, soot and ash particles, ash content, and ash composition. The code uses a hybrid technique of solving the three-dimensional radiation transport equation for absorbing, emitting and anisotropically scattering media. The technique achieves an optimal mix of computational speed and accuracy by combining the discrete ordinate method (S4), modified differential approximation (MDA) and P1 approximation in different ranges of optical thickness. The code uses spectroscopic data for estimating the absorption coefficients of the participating gases CO2, H2O and CO. It invokes Mie theory for determining the extinction and scattering coefficients of combustion particulates. The optical constants of char, soot and ash are obtained from dispersion relations derived from reflectivity, transmissivity and extinction measurements. A control-volume formulation is adopted for determining the temperature field inside the furnace. A simple char burnout model is employed for estimating heat release and the evolution of the particle size distribution. The code is written in Fortran 77, has modular form, and is machine-independent. The computer memory required by the code depends upon the number of grid points specified and whether the transport calculations are performed on a spectral or gray basis.

  1. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and can be transformed to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity and quantitative evaluation of 3D image's geometric accuracy have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation on the 3D image rendering performance with 2560×1600 elemental image resolution shows the rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability.

  2. A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations

    NASA Astrophysics Data System (ADS)

    Jayaram, V.; Crain, K.; Keller, G. R.

    2011-12-01

    We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single, multi-core CPU and graphical processing units (GPU) architectures. Our technique is based on the line element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid size typically in the range of 1m to 30m. The large high-resolution data grids in our studies employ a pre-filtered mipmap pyramid type representation for the grid data known as the Geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 to do fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on-the-fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies including crustal-scale models derived from complex geologic interpretations. For example, we used a 1KM Sphere model consisting of 105000 cells at 10m resolution with 100000 gravity stations. The line element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz utilizing just its single core. Also, unlike traditional gravity computational algorithms, the line-element approach can calculate gravity effects at locations interior or exterior to the model. The only condition that must be met is the observation point cannot be located directly above the line element. Therefore, we perform a location test and then apply appropriate formulation to those data points. We will present and compare the computational performance of the traditional prism method versus the line element
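
    The line-element idea collapses each density column onto a vertical line, for which the vertical attraction has a closed form. A minimal sketch under that assumption (flat geometry, constant density per cell, SI units) is given below; the grid, station, and density values are hypothetical, and the code is serial rather than the multi-core/GPU implementation described in the record.

      import numpy as np

      G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

      def vertical_gravity(stations, cell_xy, cell_top, cell_bottom, density, cell_area):
          """Vertical gravity (m/s^2) at each station from vertical line elements.

          Each grid cell of area cell_area is collapsed onto a vertical line of
          linear density rho*A running from depth cell_top to cell_bottom
          (measured positive downward from the station elevation). Stations must
          not lie on a line element itself.
          """
          gz = np.zeros(len(stations))
          lam = density * cell_area                      # kg/m per line element
          for k, (sx, sy) in enumerate(stations):
              r2 = (cell_xy[:, 0] - sx) ** 2 + (cell_xy[:, 1] - sy) ** 2
              gz[k] = G * np.sum(lam * (1.0 / np.sqrt(r2 + cell_top ** 2)
                                        - 1.0 / np.sqrt(r2 + cell_bottom ** 2)))
          return gz

      # toy example: a 100 m x 100 m block of cells, 10 m grid, density 2670 kg/m^3
      xs, ys = np.meshgrid(np.arange(5.0, 100.0, 10.0), np.arange(5.0, 100.0, 10.0))
      cells = np.column_stack([xs.ravel(), ys.ravel()])
      g = vertical_gravity([(50.0, 50.0)], cells, cell_top=1.0, cell_bottom=31.0,
                           density=np.full(len(cells), 2670.0), cell_area=100.0)
      print(g * 1e5)   # in mGal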

  3. A Computational Method for 3D Anisotropic Travel-time Tomography of Rocks in the Laboratory

    NASA Astrophysics Data System (ADS)

    Ghofranitabari, Mehdi; Young, R. Paul

    2013-04-01

    True triaxial loading in the laboratory applies three principal stresses to a cubic rock specimen. Elliptical anisotropy and distributed heterogeneities are introduced in the rock due to closure and opening of the pre-existing cracks and the creation and growth of new aligned cracks. The rock sample is tested in a Geophysical Imaging Cell that is equipped with an Acoustic Emission monitoring system which can perform transducer-to-transducer velocity surveys to image the velocity structure of the sample during the experiment. Ultrasonic travel-time tomography, as a non-destructive method, provides a map of wave propagation velocity in the sample in order to detect uniformly distributed or localised heterogeneities and provide the spatial variation and temporal evolution of induced damage in rocks at various stages of loading. The rock sample is partitioned into cubic grid cells as model space. A ray-based tomography method measuring body-wave travel time along ray paths between pairs of emitting and receiving transducers is used to calculate isotropic ray-path segment matrix elements (Gij), which contain segment lengths of the ith ray in the jth cell in three dimensions. Synthetic P-wave travel times are computed between pairs of transducers in a hypothetical isotropic heterogeneous cubic sample as data space, along with an error due to the precision of measurement. 3D strain of the squeezed rock and the consequent geometrical deformation is also included in computations for further accuracy. The Singular Value Decomposition method is used for the inversion from data space to model space. In the next step, the anisotropic ray-path segment matrix and the corresponding data space are computed for hypothetical anisotropic heterogeneous samples based on the elliptical anisotropic model of velocity which is obtained from the real laboratory experimental data. The method is examined for several different synthetic heterogeneous models. An "Inaccuracy factor" is utilized to inquire the
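
    The inversion from data space to model space amounts to solving t = G s for the cell slownesses s given the ray-segment matrix G. The sketch below uses a truncated-SVD pseudo-inverse on a two-cell toy example; it illustrates the isotropic case only and ignores the anisotropic parameterisation discussed in the record.

      import numpy as np

      def invert_travel_times(G, t, rank=None):
          """Solve t = G @ s for cell slowness s via a (truncated) SVD pseudo-inverse.

          G[i, j] is the length of ray i inside cell j and t[i] is the observed
          travel time; truncating small singular values regularises the inversion.
          """
          U, svals, Vt = np.linalg.svd(G, full_matrices=False)
          if rank is None:
              rank = int(np.sum(svals > 1e-10 * svals[0]))
          inv = np.zeros_like(svals)
          inv[:rank] = 1.0 / svals[:rank]
          return (Vt.T * inv) @ (U.T @ t)

      # synthetic 2-cell example: two crossing rays of known geometry
      G = np.array([[3.0, 1.0],
                    [1.0, 3.0]])                      # segment lengths (mm)
      true_slowness = np.array([1 / 5.9, 1 / 6.1])    # slowness in us/mm (velocity ~6 mm/us)
      t = G @ true_slowness
      print(invert_travel_times(G, t), true_slowness)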

  4. Machine Learning, deep learning and optimization in computer vision

    NASA Astrophysics Data System (ADS)

    Canu, Stéphane

    2017-03-01

    As quoted in the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning covering basic motivations, ideas, models and optimization in deep learning for computer vision, identifying challenges and opportunities. It will focus on issues related with large scale learning that is: high dimensional features, large variety of visual classes, and large number of examples.

  5. Computer vision system for three-dimensional inspection

    NASA Astrophysics Data System (ADS)

    Penafiel, Francisco; Fernandez, Luis; Campoy, Pascual; Aracil, Rafael

    1994-11-01

    In the manufacturing process certain workpieces are inspected for dimensional measurement using sophisticated quality control techniques. During the operation phase, these parts are deformed due to the high temperatures involved in the process. The evolution of the workpieces structure is noticed on their dimensional modification. This evolution can be measured with a set of dimensional parameters. In this paper, a three dimensional automatic inspection of these parts is proposed. The aim is the measuring of some workpieces features through 3D control methods using directional lighting and a computer artificial vision system. The results of this measuring must be compared with the parameters obtained after the manufacturing process in order to determine the degree of deformation of the workpiece and decide whether it is still usable or not. Workpieces outside a predetermined specification range must be discarded and replaced by new ones. The advantage of artificial vision methods is based on the fact that there is no need to get in touch with the object to inspect. This makes feasible its use in hazardous environments, not suitable for human beings. A system has been developed and applied to the inspection of fuel assemblies in nuclear power plants. Such a system has been implemented in a very high level of radiation environment and operates in underwater conditions. The physical dimensions of a nuclear fuel assembly are modified after its operation in a nuclear power plant in relation to the original dimensions after its manufacturing. The whole system (camera, mechanical and illumination systems and the radioactive fuel assembly) is submerged in water for minimizing radiation effects and is remotely controlled by human intervention. The developed system has to inspect accurately a set of measures on the fuel assembly surface such as length, twists, arching, etc. The present project called SICOM (nuclear fuel assembly inspection system) is included into the R

  6. Does this computational theory solve the right problem? Marr, Gibson, and the goal of vision.

    PubMed

    Warren, William H

    2012-01-01

    David Marr's book Vision attempted to formulate a thoroughgoing formal theory of perception. Marr borrowed much of the "computational" level from James Gibson: a proper understanding of the goal of vision, the natural constraints, and the available information are prerequisite to describing the processes and mechanisms by which the goal is achieved. Yet, as a research program leading to a computational model of human vision, Marr's program did not succeed. This article asks why, using the perception of 3D shape as a morality tale. Marr presumed that the goal of vision is to recover a general-purpose Euclidean description of the world, which can be deployed for any task or action. On this formulation, vision is underdetermined by information, which in turn necessitates auxiliary assumptions to solve the problem. But Marr's assumptions did not actually reflect natural constraints, and consequently the solutions were not robust. We now know that humans do not in fact recover Euclidean structure--rather, they reliably perceive qualitative shape (hills, dales, courses, ridges), which is specified by the second-order differential structure of images. By recasting the goals of vision in terms of our perceptual competencies, and doing the hard work of analyzing the information available under ecological constraints, we can reformulate the problem so that perception is determined by information and prior knowledge is unnecessary.

  7. A supervisor for the successive 3D computations of magnetic, mechanical and acoustic quantities in power oil inductors and transformers

    SciTech Connect

    Reyne, G.; Magnin, H.; Berliat, G.; Clerc, C.

    1994-09-01

    A supervisor has been developed to allow successive 3D computations of different quantities by different software packages on the same physical problem. The noise of a given power oil transformer can be deduced from the surface vibrations of the tank. These vibrations are obtained through a mechanical computation whose inputs are the electromagnetic forces provided by an electromagnetic computation. Magnetic, mechanical and acoustic experimental data are compared with the results of the 3D computations. Stress is put on the main characteristics of the supervisor, such as the transfer of a given quantity from one mesh to the other.

  8. Computer Vision: Discovery And Opportunity Await

    NASA Astrophysics Data System (ADS)

    Mcintosh, S. W.

    2014-12-01

    Current solar image archives contain information, lots of information, often so much information that "end-point science" is not immediately clear at the start of a project to survey them. However, in such datasets and their metadata, significant science could be hidden just a few queries below the surface. We will discuss quite possibly the largest astronomical database, the "EUV Brightpoint Database", which contains information about over 200 million individual features that are ubiquitously observed in the Sun's corona - EUV Brightpoints (or BPs). While end-point science was not clear in 2002, when the project to catalog the Sun's BPs in the archive of SOHO's Extreme ultraviolet Imaging Telescope (EIT) images began, the impact of those few queries could cause quite a stir in the field. Our systematic analysis of BPs, and the magnetic scale on which they appear to form, allowed us to demonstrate that the landmarks of sunspot cycle 23 could be explained in terms of the evolution and interaction of latitudinally and temporally overlapping bands of magnetic activity. Those bands appear to belong to the Sun's 22-year magnetic activity cycle. The patterns that these bands make closely match helioseismic inference of the Sun's torsional oscillation - a signature of rotational anomalies taking place in the Sun's interior. The high-latitude origin and start dates preceding sunspot formation by more than a decade - on the same activity band - pose a significant challenge to our understanding of the processes responsible for the production of the Sun's quasi-decadal variability. We sincerely doubt that the BP database is alone in containing information of such potential scientific value. Often one just has to get lucky, before being able to formulate the correct queries. We hope that the material presented in this talk can motivate a scientific exploitation of the computer vision databases currently being built from the stunning images of our star in addition to some retrospective

  9. A new 3D texture feature based computer-aided diagnosis approach to differentiate pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Han, Fangfang; Wang, Huafeng; Song, Bowen; Zhang, Guopeng; Lu, Hongbing; Moore, William; Zhao, Hong; Liang, Zhengrong

    2013-02-01

    Distinguishing malignant pulmonary nodules from benign ones is of much importance in computer-aided diagnosis of lung diseases. Compared to many previous methods, which are based on assessing the shape or growth of nodules, the proposed three-dimensional (3D) texture-feature-based approach extracted fifty kinds of 3D textural features from gray-level, gradient and curvature co-occurrence matrices and further derivatives of the volume data of the nodules. To evaluate the presented approach, the Lung Image Database Consortium public database was downloaded. Each case of the database contains an annotation file, which indicates the diagnosis results from up to four radiologists. In order to relieve the partial-volume effect, an interpolation process was carried out on volume data with an image slice thickness of more than 1 mm, and the downloaded datasets were categorized into five groups to validate the proposed approach: one group with thickness less than 1 mm, and two thickness ranges, from 1 mm to 1.25 mm and greater than 1.25 mm (each range containing two groups, one with interpolation and the other without). Since the support vector machine is based on statistical learning theory and aims to learn to predict future data, it was chosen as the classifier to perform the differentiation task. The performance measure was the area under the curve (AUC) of the Receiver Operating Characteristic. From 284 nodules (122 malignant and 162 benign), the validation experiments reported a mean of 0.9051 and a standard deviation of 0.0397 for the AUC value over 100 randomizations.
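
    A reduced version of this pipeline, 2-D gray-level co-occurrence features fed to a support vector machine and scored by ROC AUC, can be sketched with scikit-image and scikit-learn (a recent scikit-image is assumed for the graycomatrix/graycoprops names). The paper's fifty 3-D gray-level, gradient and curvature features are not reproduced; the synthetic ROIs below merely stand in for nodule volumes.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
      from sklearn.svm import SVC
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      def texture_features(roi):
          """Simple 2-D GLCM features (contrast, homogeneity, energy, correlation)
          for one region of interest; only a stand-in for the paper's 3-D features."""
          glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                              levels=64, symmetric=True, normed=True)
          return np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy", "correlation")])

      # synthetic stand-in for nodule ROIs: class 1 has a noisier texture
      rng = np.random.default_rng(0)
      X, y = [], []
      for label in (0, 1):
          for _ in range(60):
              roi = rng.normal(32, 4 + 6 * label, size=(32, 32)).clip(0, 63).astype(np.uint8)
              X.append(texture_features(roi))
              y.append(label)
      X, y = np.array(X), np.array(y)

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
      clf = SVC(probability=True).fit(Xtr, ytr)
      print("AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))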

  10. Projection-based metal-artifact reduction for industrial 3D X-ray computed tomography.

    PubMed

    Amirkhanov, Artem; Heinzl, Christoph; Reiter, Michael; Kastner, Johann; Gröller, M Eduard

    2011-12-01

    Multi-material components, which contain metal parts surrounded by plastic materials, are highly interesting for inspection using industrial 3D X-ray computed tomography (3DXCT). Examples of this application scenario are connectors or housings with metal inlays in the electronic or automotive industry. A major problem of this type of components is the presence of metal, which causes streaking artifacts and distorts the surrounding media in the reconstructed volume. Streaking artifacts and dark-band artifacts around metal components significantly influence the material characterization (especially for the plastic components). In specific cases these artifacts even prevent a further analysis. Due to the nature and the different characteristics of artifacts, the development of an efficient artifact-reduction technique in reconstruction-space is rather complicated. In this paper we present a projection-space pipeline for metal-artifacts reduction. The proposed technique first segments the metal in the spatial domain of the reconstructed volume in order to separate it from the other materials. Then metal parts are forward-projected on the set of projections in a way that metal-projection regions are treated as voids. Subsequently the voids, which are left by the removed metal, are interpolated in the 2D projections. Finally, the metal is inserted back into the reconstructed 3D volume during the fusion stage. We present a visual analysis tool, allowing for interactive parameter estimation of the metal segmentation. The results of the proposed artifact-reduction technique are demonstrated on a test part as well as on real world components. For these specimens we achieve a significant reduction of metal artifacts, allowing an enhanced material characterization. © 2010 IEEE
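
    A much-simplified, parallel-beam 2-D version of this projection-space pipeline can be sketched with scikit-image: threshold the metal, forward-project the slice and the metal mask, interpolate the sinogram across the metal trace, reconstruct, and paste the metal back in. The threshold and phantom below are hypothetical, and the industrial cone-beam geometry of the paper is not modelled.

      import numpy as np
      from skimage.transform import radon, iradon

      def reduce_metal_artifacts(recon, metal_threshold, theta):
          """Simplified projection-space metal-artifact reduction (parallel-beam, 2-D)."""
          metal = recon > metal_threshold                       # 1. segment metal
          sino = radon(recon, theta=theta)                      # 2. forward-project slice
          metal_trace = radon(metal.astype(float), theta=theta) > 0
          corrected = sino.copy()
          for j in range(sino.shape[1]):                        # 3. interpolate voids per angle
              bad = metal_trace[:, j]
              if bad.any() and (~bad).any():
                  idx = np.arange(sino.shape[0])
                  corrected[bad, j] = np.interp(idx[bad], idx[~bad], sino[~bad, j])
          out = iradon(corrected, theta=theta)                  # 4. reconstruct
          out[metal] = recon[metal]                             # 5. fusion: re-insert metal
          return out

      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      phantom = np.zeros((128, 128))
      phantom[40:90, 40:90] = 1.0                               # plastic body
      phantom[60:68, 60:68] = 8.0                               # metal inlay
      print(reduce_metal_artifacts(phantom, metal_threshold=4.0, theta=theta).shape)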

  11. Parallel computing simulation of electrical excitation and conduction in the 3D human heart.

    PubMed

    Di Yu; Dongping Du; Hui Yang; Yicheng Tu

    2014-01-01

    A correctly beating heart is important to ensure adequate circulation of blood throughout the body. Normal heart rhythm is produced by the orchestrated conduction of electrical signals throughout the heart. Cardiac electrical activity is the result of a series of complex biochemical-mechanical reactions, which involve the transport and bio-distribution of ionic flows through a variety of biological ion channels. Cardiac arrhythmias are caused by the direct alteration of ion channel activity that results in changes in the action potential (AP) waveform. In this work, we developed a whole-heart simulation model using massively parallel computing with GPGPU and OpenGL. The simulation algorithm was implemented in several versions for comparison, including one conventional CPU version and two GPU versions based on the Nvidia CUDA platform. OpenGL was utilized as the visualization/interaction platform because it is open source, lightweight and universally supported by various operating systems. The experimental results show that the GPU-based simulation outperforms the conventional CPU-based approach and significantly improves the speed of simulation. By adopting modern computer architecture, the present investigation enables real-time simulation and visualization of electrical excitation and conduction in the large and complicated 3D geometry of a real-world human heart.

  12. Novel 3D hexapod computer-assisted orthopaedic surgery system for closed diaphyseal fracture reduction.

    PubMed

    Tang, Peifu; Hu, Lei; Du, Hailong; Gong, Minli; Zhang, Lihai

    2012-03-01

    Long-bone fractures are very common in trauma centers. The conventional Arbeitsgemeinschaft für Osteosynthesefragen (AO) technique contributes to most fracture healing problems, and external fixation technology also has several disadvantages, so new techniques are being explored. A novel hexapod computer-assisted fracture reduction system based on a 3D-CT image reconstruction process is presented for closed reduction of long-bone diaphyseal fractures. A new reduction technique and upgraded reduction device are described and the whole system has been validated. Ten bovine femoral fracture models were used with random fracture patterns. Test results were as follows: residual deviation of 1.24 ± 0.65 mm for the axial deflection, 1.19 ± 0.37 mm for the translation, 2.34 ± 1.79° for the angulation, and 2.83 ± 0.9° for the rotation. The reduction mechanism has the advantages of high positioning, reduction and computer accuracy, and intra-operative stability for both patients and the surgical team. With further investigation, it could be applied to many kinds of long-bone diaphyseal fractures. Copyright © 2011 John Wiley & Sons, Ltd.

  13. Human identification through frontal sinus 3D superimposition: Pilot study with Cone Beam Computer Tomography.

    PubMed

    Beaini, Thiago Leite; Duailibi-Neto, Eduardo F; Chilvarquer, Israel; Melani, Rodolfo F H

    2015-11-01

    As a unique anatomical feature of the human body, the frontal sinus morphology has been used for identification of unknown bodies with many techniques, mostly using 2D postero-anterior X-rays. With the increasing use of Cone-Beam Computed Tomography (CBCT), the availability of this exam as an ante-mortem record should be considered. The purpose of this study is to establish a new technique for frontal sinus identification through direct superimposition of 3D volumetric models obtained from CBCT exams, by testing two distinct situations. First, a reproducibility test, in which two observers independently rendered models of the frontal sinus from a sample of 20 CBCT exams and identified them on each other's list. In the second situation, one observer tested the established protocol on different exams of three individuals. Using the open-source DICOM viewer InVesalius® for rendering, MeshLab® for positioning the models and CloudCompare for volumetric comparison, both observers matched cases with 100% accuracy, demonstrating the level of coincidence achievable in an identification situation. The uniqueness of the frontal sinus topography is remarkable and, through the described technique, can be used in forensics as an identification method whenever both the sinus structure and an ante-mortem computed tomography exam are available. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  14. Quantification of substrate and cellular strains in stretchable 3D cell cultures: an experimental and computational framework.

    PubMed

    González-Avalos, P; Mürnseer, M; Deeg, J; Bachmann, A; Spatz, J; Dooley, S; Eils, R; Gladilin, E

    2017-05-01

    The mechanical cell environment is a key regulator of biological processes. In living tissues, cells are embedded into the 3D extracellular matrix and permanently exposed to mechanical forces. Quantification of the cellular strain state in a 3D matrix is therefore the first step towards understanding how physical cues determine single cell and multicellular behaviour. The majority of cell assays are, however, based on 2D cell cultures that lack many essential features of the in vivo cellular environment. Furthermore, nondestructive measurement of substrate and cellular mechanics requires appropriate computational tools for microscopic image analysis and interpretation. Here, we present an experimental and computational framework for generation and quantification of the cellular strain state in 3D cell cultures using a combination of a 3D substrate stretcher, multichannel microscopic imaging and computational image analysis. The 3D substrate stretcher enables deformation of living cells embedded in bead-labelled 3D collagen hydrogels. Local substrate and cell deformations are determined by tracking displacement of fluorescent beads with subsequent finite element interpolation of cell strains over a tetrahedral tessellation. In this feasibility study, we debate diverse aspects of deformable 3D culture construction, quantification and evaluation, and present an example of its application for quantitative analysis of a cellular model system based on primary mouse hepatocytes undergoing transforming growth factor (TGF-β) induced epithelial-to-mesenchymal transition. © 2017 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.
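
    The strain-quantification step can be illustrated by fitting a single affine deformation gradient F to tracked bead positions and computing the Green-Lagrange strain E = 0.5*(F^T F - I); the paper instead interpolates spatially varying strains over a tetrahedral tessellation, so the sketch below only captures the homogeneous-deformation case with synthetic bead data.

      import numpy as np

      def strain_from_beads(ref_positions, def_positions):
          """Fit x_def ~ F @ x_ref + c by least squares and return the
          Green-Lagrange strain tensor E = 0.5 * (F^T F - I)."""
          ref_h = np.hstack([ref_positions, np.ones((len(ref_positions), 1))])  # homogeneous coords
          M, *_ = np.linalg.lstsq(ref_h, def_positions, rcond=None)             # shape (4, 3)
          F = M[:3, :].T                                                        # deformation gradient
          return 0.5 * (F.T @ F - np.eye(3))

      # synthetic check: 10% uniaxial stretch along x with tracking noise
      rng = np.random.default_rng(0)
      ref = rng.uniform(0, 100, size=(200, 3))                  # bead positions, micrometres
      F_true = np.diag([1.10, 0.97, 0.97])
      deformed = ref @ F_true.T + rng.normal(0, 0.2, ref.shape)
      print(strain_from_beads(ref, deformed).round(3))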

  15. Target discrimination using computational vision human perception models

    NASA Astrophysics Data System (ADS)

    Lindquist, George H.; Witus, Gary; Cook, Thomas H.; Freeling, J. Richard; Gerhart, Grant R.

    1994-07-01

    The current DoD target acquisition models have two primary deficiencies: they use simplistic representations of the vehicle and background signatures, and a highly simplified description of the human observer. The current signature representation often fails for complex signature configurations, yields inaccurate detectability and marginal pay-off predictions for low signature vehicles, is not extensible to false alarms and temporal cues, and precludes vehicle design guidance and diagnosis. The current human observer model is simplified to the same degree as the signature representation, and as such is not extensible to high fidelity signature representations. In answer to the noted deficiencies, we have developed the TARDEC visual model (TVM). We have adopted an alternative approach that is based on emerging academic computational vision models (CVM). Our approach is tailored to visual signatures, though the model is applicable to thermal, SAR as well as other categories of imagery. Color imagery, input to the model, is initially transformed into a 3D color-opponent space comprising luminance, red-green, and yellow- blue axes. Each plane in the color-opponent space is then decomposed by local, oriented spatial frequency analyzers (Gabor or wavelet filters) in keeping with current knowledge of retinal/cortical processing. Signal-to-noise statistics are then calculated on each channel, appropriately aggregated over all channels, and used within the signal detection theory context to predict detection and false alarm probabilities.
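
    The front end described here, a colour-opponent transform followed by oriented spatial-frequency analyzers, can be sketched with a rough opponent decomposition and a small Gabor filter bank from scikit-image; the opponent weights and filter parameters below are illustrative assumptions, not the calibrated TVM channels.

      import numpy as np
      from skimage.filters import gabor

      def opponent_channels(rgb):
          """Rough colour-opponent decomposition (illustrative weights, not TVM's)."""
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          lum = (r + g + b) / 3.0
          rg = r - g
          yb = (r + g) / 2.0 - b
          return lum, rg, yb

      def oriented_energy(channel, frequencies=(0.1, 0.25), orientations=4):
          """Local oriented energy from a small Gabor filter bank on one channel."""
          responses = []
          for f in frequencies:
              for k in range(orientations):
                  real, imag = gabor(channel, frequency=f, theta=k * np.pi / orientations)
                  responses.append(np.sqrt(real**2 + imag**2))
          return responses

      rgb = np.random.default_rng(0).random((64, 64, 3))
      lum, rg, yb = opponent_channels(rgb)
      print(len(oriented_energy(lum)), oriented_energy(lum)[0].shape)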

  16. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  17. Computer graphics visions and challenges: a European perspective.

    PubMed

    Encarnação, José L

    2006-01-01

    I have briefly described important visions and challenges in computer graphics. They are a personal and therefore subjective selection. But most of these issues have to be addressed and solved--no matter if we call them visions or challenges or something else--if we want to make and further develop computer graphics into a key enabling technology for our IT-based society.

  18. Computer-assisted diagnostic system for neurodegenerative dementia using brain SPECT and 3D-SSP.

    PubMed

    Ishii, Kazunari; Kanda, Tomonori; Uemura, Takafumi; Miyamoto, Naokazu; Yoshikawa, Toshiki; Shimada, Kenichi; Ohkawa, Shingo; Minoshima, Satoshi

    2009-05-01

    To develop a computer-assisted automated diagnostic system to distinguish among Alzheimer disease (AD), dementia with Lewy bodies (DLB), and other degenerative disorders in patients with mild dementia. Single photon emission computed tomography (SPECT) images with injection of N-Isopropyl-p-[(123)I]iodoamphetamine (IMP) were obtained from patients with mild degenerative dementia. First, datasets from 20 patients with mild AD, 15 patients with DLB, and 17 healthy controls were used to develop an automated diagnostic system based on three-dimensional stereotactic surface projections (3D-SSP). AD- and DLB-specific regional templates were created using 3D-SSP, and critical Z scores in the templates were established. Datasets from 50 AD patients, 8 DLB patients, and 10 patients with non-AD/DLB type degenerative dementia (5 with frontotemporal dementia and 5 with progressive supranuclear palsy) were then used to test the diagnostic accuracy of the optimized automated system in comparison to the diagnostic interpretation of conventional IMP-SPECT images. These comparisons were performed to differentiate AD and DLB from non-AD/DLB and to distinguish AD from DLB. A receiver operating characteristic (ROC) analysis was performed. The area under the ROC curve (Az) and the accuracy of the automated diagnosis system were 0.89 and 82%, respectively, for AD/DLB vs. non-AD/DLB patients, and 0.70 and 65%, respectively, for AD vs. DLB patients. The mean Az and the accuracy of the visual inspection were 0.84 and 77%, respectively, for AD/DLB vs. non-AD/DLB patients, and 0.70 and 65%, respectively, for AD vs. DLB patients. The mean Az and the accuracy of the combination of visual inspection and this system were 0.96 and 91%, respectively, for AD/DLB vs. non-AD/DLB patients, and 0.70 and 66%, respectively, for AD vs. DLB patients. The system developed in the present study achieved as good discrimination of AD, DLB, and other degenerative disorders in patients with mild dementia
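
    The core of such a system is a voxelwise Z score of the patient's normalised perfusion map against control statistics, averaged within a disease-specific regional template and compared with a critical Z value. The sketch below uses flat hypothetical arrays and an arbitrary cut-off; it does not reproduce the 3D-SSP surface projection or the actual templates.

      import numpy as np

      def regional_z_score(patient, control_mean, control_sd, region_mask):
          """Mean voxelwise Z score inside a disease-specific region template.

          All maps are assumed to be spatially normalised (e.g. surface
          projections) and intensity-normalised to a reference region beforehand.
          """
          z = (control_mean - patient) / control_sd      # hypoperfusion gives positive Z
          return float(z[region_mask].mean())

      # hypothetical data: flattened surface-projection maps
      rng = np.random.default_rng(0)
      control_mean = np.full(10000, 1.0)
      control_sd = np.full(10000, 0.1)
      ad_template = np.zeros(10000, dtype=bool)
      ad_template[2000:3000] = True                       # e.g. a parietotemporal region
      patient = rng.normal(1.0, 0.1, 10000)
      patient[2000:3000] -= 0.25                          # simulated regional deficit
      critical_z = 1.5                                    # hypothetical cut-off
      print(regional_z_score(patient, control_mean, control_sd, ad_template) > critical_z)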

  19. 3D printing meets computational astrophysics: deciphering the structure of η Carinae's inner colliding winds

    NASA Astrophysics Data System (ADS)

    Madura, T. I.; Clementel, N.; Gull, T. R.; Kruip, C. J. H.; Paardekooper, J.-P.

    2015-06-01

    We present the first 3D prints of output from a supercomputer simulation of a complex astrophysical system, the colliding stellar winds in the massive (≳120 M⊙), highly eccentric (e ˜ 0.9) binary star system η Carinae. We demonstrate the methodology used to incorporate 3D interactive figures into a PDF (Portable Document Format) journal publication and the benefits of using 3D visualization and 3D printing as tools to analyse data from multidimensional numerical simulations. Using a consumer-grade 3D printer (MakerBot Replicator 2X), we successfully printed 3D smoothed particle hydrodynamics simulations of η Carinae's inner (r ˜ 110 au) wind-wind collision interface at multiple orbital phases. The 3D prints and visualizations reveal important, previously unknown `finger-like' structures at orbital phases shortly after periastron (φ ˜ 1.045) that protrude radially outwards from the spiral wind-wind collision region. We speculate that these fingers are related to instabilities (e.g. thin-shell, Rayleigh-Taylor) that arise at the interface between the radiatively cooled layer of dense post-shock primary-star wind and the fast (3000 km s-1), adiabatic post-shock companion-star wind. The success of our work and easy identification of previously unrecognized physical features highlight the important role 3D printing and interactive graphics can play in the visualization and understanding of complex 3D time-dependent numerical simulations of astrophysical phenomena.

  20. Memory usage reduction and intensity modulation for 3D holographic display using non-uniformly sampled computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Zhang, Zhao; Liu, Juan; Jia, Jia; Li, Xin; Pan, Yijie; Han, Jian; Hu, Bin; Wang, Yongtian

    2013-12-01

    Real-time holographic display faces a heavy computational load for computer-generated holograms and the challenge of precise intensity modulation of 3D images reconstructed by phase-only holograms. In this study, we demonstrate a method for reducing memory usage and modulating intensity in 3D holographic display. The proposed method eliminates the redundant information of holograms by employing a non-uniform sampling technique. Combined with the novel look-up table method, a 70% reduction in storage can be reached. The gray-scale modulation of 3D images reconstructed by phase-only holograms can also be extended. We perform both numerical simulations and optical experiments to verify the practicality of this method, and the results match well with each other. It is believed that the proposed method can be used in 3D dynamic holographic display and in the design of diffractive phase elements.

  1. The effects of 3D interactive animated graphics on student learning and attitudes in computer-based instruction

    NASA Astrophysics Data System (ADS)

    Moon, Hye Sun

    Visuals are most extensively used as instructional tools in education to present spatially based information. Recent computer technology allows the generation of 3D animated visuals to extend presentation in computer-based instruction. Animated visuals in 3D representation not only possess motivational value that promotes positive attitudes toward instruction but also facilitate learning when the subject matter requires dynamic motion and 3D visual cues. In this study, three questions are explored: (1) how 3D graphics affect student learning and attitude, in comparison with 2D graphics; (2) how animated graphics affect student learning and attitude, in comparison with static graphics; and (3) whether the use of 3D graphics, when supported by interactive animation, provides the most effective visual cues to improve learning and to develop positive attitudes. A total of 145 eighth-grade students participated in a 2 x 2 factorial design study. The subjects were randomly assigned to one of four computer-based instructions: 2D static; 2D animated; 3D static; and 3D animated. The results indicated that: (1) Students in the 3D graphic condition exhibited more positive attitudes toward instruction than those in the 2D graphic condition. No group differences were found between the posttest scores of the 3D and 2D graphic conditions. However, students in the 3D graphic condition took less time for information retrieval on the posttest than those in the 2D graphic condition. (2) Students in the animated graphic condition exhibited slightly more positive attitudes toward instruction than those in the static graphic condition. No group differences were found between the posttest scores of the animated and static graphic conditions. However, students in the animated graphic condition took less time for information retrieval on the posttest than those in the static graphic condition. (3) Students in the 3D animated graphic condition

  2. Automatic validation of computational models using pseudo-3D spatio-temporal model checking.

    PubMed

    Pârvu, Ovidiu; Gilbert, David

    2014-12-02

    Computational models play an increasingly important role in systems biology for generating predictions and in synthetic biology as executable prototypes/designs. For real-life (clinical) applications there is a need to scale up and build more complex spatio-temporal multiscale models; these could enable investigating how changes at small scales are reflected at large scales and vice versa. Results generated by computational models can be applied to real-life applications only if the models have been validated first. Traditional in silico model checking techniques only capture how non-dimensional properties (e.g. concentrations) evolve over time and are suitable for small-scale systems (e.g. metabolic pathways). The validation of larger-scale systems (e.g. multicellular populations) additionally requires capturing how spatial patterns and their properties change over time, which is not considered by traditional non-spatial approaches. We developed and implemented a methodology for the automatic validation of computational models with respect to both their spatial and temporal properties. Stochastic biological systems are represented by abstract models which assume a linear structure of time and a pseudo-3D representation of space (2D space plus a density measure). Time series data generated by such models are provided as input to parameterised image processing modules which automatically detect and analyse spatial patterns (e.g. cells) and clusters of such patterns (e.g. cellular populations). For capturing how spatial and numeric properties change over time, the Probabilistic Bounded Linear Spatial Temporal Logic is introduced. Given a collection of time series data and a formal spatio-temporal specification, the model checker Mudi ( http://mudi.modelchecking.org ) determines probabilistically whether the formal specification holds for the computational model or not. Mudi is an approximate probabilistic model checking platform which enables users to choose between frequentist and
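
    A minimal illustration of the frequentist flavour of such statistical model checking is sketched below. It is not Mudi's specification language or algorithm; it simply estimates, from repeated stochastic simulation runs, the probability that one numeric spatio-temporal property (for example, the area of a detected cluster) stays within given bounds over the whole time course. All names and numbers are illustrative assumptions.

```python
import numpy as np

def estimate_property_probability(traces, lower, upper):
    """Frequentist estimate of P(property stays within [lower, upper] at all
    time points), computed over a collection of stochastic simulation runs.

    traces : array of shape (n_runs, n_timepoints) holding one numeric
             spatio-temporal property (e.g. cluster area) per run.
    """
    traces = np.asarray(traces, dtype=float)
    holds = np.all((traces >= lower) & (traces <= upper), axis=1)
    return holds.mean()

# Toy usage: 100 simulated runs of a noisy property sampled at 50 time points.
rng = np.random.default_rng(0)
runs = 50.0 + 0.1 * rng.normal(0.0, 2.0, size=(100, 50)).cumsum(axis=1)
print(estimate_property_probability(runs, lower=40.0, upper=60.0))
```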

  3. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more CT/MR studies are acquired with ever larger data sets, more radiologists and clinicians would like to use PACS workstations (WS) to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for a 3D image display component that offers not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use these 3D functions, such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low hardware requirements, easy integration, reliable performance and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work at a PACS display workstation at any time.

  4. Craniosynostosis: prenatal diagnosis by 2D/3D ultrasound, magnetic resonance imaging and computed tomography.

    PubMed

    Helfer, Talita Micheletti; Peixoto, Alberto Borges; Tonni, Gabriele; Araujo Júnior, Edward

    2016-09-01

    Craniosynostosis is defined as the process of premature fusion of one or more of the cranial sutures. It is a common condition that occurs in about 1 in 2,000 live births. Craniosynostosis may be classified as primary or secondary. It is also classified as nonsyndromic or syndromic. According to suture involvement, craniosynostosis may affect a single suture or multiple sutures. There is a wide range of syndromes involving craniosynostosis, and the most common are Apert, Pfeiffer, Crouzon, Saethre-Chotzen and Muenke syndromes. The underlying etiology of nonsyndromic craniosynostosis is unknown. Mutations in the fibroblast growth factor (FGF) signalling pathway play a crucial role in the etiology of craniosynostosis syndromes. The prenatal ultrasound detection rate of craniosynostosis is low. Nowadays, different methods can be applied for prenatal diagnosis of craniosynostosis, such as two-dimensional (2D) and three-dimensional (3D) ultrasound, magnetic resonance imaging (MRI), computed tomography (CT) scan and, finally, molecular diagnosis. The presence of craniosynostosis may affect the birthing process. Fetuses with craniosynostosis also have higher rates of perinatal complications. In order to avoid the risks of untreated craniosynostosis, children are usually treated surgically soon after postnatal diagnosis.

  5. GBM Volumetry using the 3D Slicer Medical Image Computing Platform

    PubMed Central

    Egger, Jan; Kapur, Tina; Fedorov, Andriy; Pieper, Steve; Miller, James V.; Veeraraghavan, Harini; Freisleben, Bernd; Golby, Alexandra J.; Nimsky, Christopher; Kikinis, Ron

    2013-01-01

    Volumetric change in glioblastoma multiforme (GBM) over time is a critical factor in treatment decisions. Typically, the tumor volume is computed on a slice-by-slice basis using MRI scans obtained at regular intervals. 3D Slicer – a free platform for biomedical research – provides an alternative to this manual slice-by-slice segmentation process, which is significantly faster and requires less user interaction. In this study, 4 physicians segmented GBMs in 10 patients, once using the competitive region-growing based GrowCut segmentation module of Slicer, and once purely by drawing boundaries completely manually on a slice-by-slice basis. Furthermore, we provide a variability analysis for three physicians for 12 GBMs. The time required for GrowCut segmentation was on average 61% of the time required for a purely manual segmentation. A comparison of Slicer-based segmentation with manual slice-by-slice segmentation resulted in a Dice Similarity Coefficient of 88.43 ± 5.23% and a Hausdorff Distance of 2.32 ± 5.23 mm. PMID:23455483
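
    The two overlap metrics quoted above can be reproduced for any pair of binary segmentations with a few lines of NumPy/SciPy. The sketch below is generic and not tied to Slicer or the GrowCut module, and the toy masks are made up purely for illustration.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (N x D arrays)."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

# Toy usage on two overlapping 2D masks.
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[12:42, 12:42] = True
print(dice_coefficient(a, b))
print(hausdorff_distance(np.argwhere(a), np.argwhere(b)))
```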

  6. Superresolution of 3-D computational integral imaging based on moving least square method.

    PubMed

    Kim, Hyein; Lee, Sukho; Ryu, Taekyung; Yoon, Jungho

    2014-11-17

    In this paper, we propose an edge-directive moving least square (ED-MLS) based superresolution method for computational integral imaging reconstruction (CIIR). Due to the low resolution of the elemental images and the alignment error of the microlenses, it is not easy to obtain an accurate registration result in integral imaging, which makes it difficult to apply superresolution to the CIIR application. To overcome this problem, we propose the edge-directive moving least square (ED-MLS) based superresolution method, which utilizes the properties of the moving least square. The proposed ED-MLS based superresolution takes the direction of the edge into account in the moving least square reconstruction to deal with the abrupt brightness changes in the edge regions, and is less sensitive to registration error. Furthermore, we propose a framework which shows how the data have to be collected for the superresolution problem in the CIIR application. Experimental results verify that the resolution of the elemental images is enhanced, and that a high resolution reconstructed 3-D image can be obtained with the proposed method.
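
    For readers unfamiliar with moving least squares, the sketch below shows the plain (non edge-directive) 1D version: at each query point a low-order polynomial is fitted to the samples using Gaussian locality weights. It is only meant to convey the core idea; the edge-direction handling and the 2D integral-imaging setup of the paper are not reproduced, and the bandwidth h and polynomial degree are arbitrary choices.

```python
import numpy as np

def mls_reconstruct(x_samples, y_samples, x_query, h=0.5, degree=2):
    """Plain moving least square reconstruction in 1D.

    At each query point a polynomial of the given degree is fitted to the
    samples with Gaussian weights centred on that point; the fitted value
    at the query point is returned.
    """
    out = np.empty(len(x_query), dtype=float)
    for i, xq in enumerate(x_query):
        sw = np.sqrt(np.exp(-((x_samples - xq) / h) ** 2))   # sqrt of weights
        A = np.vander(x_samples - xq, degree + 1)             # local poly basis
        coeffs, *_ = np.linalg.lstsq(A * sw[:, None], sw * y_samples, rcond=None)
        out[i] = coeffs[-1]                                    # value at offset 0
    return out

# Toy usage: recover a smooth curve from irregular, noisy samples.
rng = np.random.default_rng(1)
xs = np.sort(rng.uniform(0, 2 * np.pi, 60))
ys = np.sin(xs) + rng.normal(0, 0.1, xs.size)
xq = np.linspace(0, 2 * np.pi, 200)
yq = mls_reconstruct(xs, ys, xq)
```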

  7. Detectability of hepatic tumors during 3D post-processed ultrafast cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Paul, Jijo; Vogl, Thomas J.; Chacko, Annamma

    2015-10-01

    To evaluate hepatic tumor detection using ultrafast cone-beam computed tomography (UCBCT) cross-sectional and 3D post-processed image datasets. 657 patients were examined using UCBCT during hepatic transarterial chemoembolization (TACE), and data were collected retrospectively from January 2012 to September 2014. Tumor detectability, diagnostic ability, detection accuracy and sensitivity were examined for different hepatic tumors using UCBCT cross-sectional, perfusion blood volume (PBV) and UCBCT-MRI (magnetic resonance imaging) fused image datasets. Appropriate statistical tests were used to compare the collected sample data. Fused image data showed significantly higher (all P < 0.05) diagnostic ability for hepatic tumors compared to UCBCT or PBV image data. The detectability of small hepatic tumors (<5 mm) was significantly reduced (all P < 0.05) using UCBCT cross-sectional images compared to MRI or fused image data; however, PBV improved tumor detectability using a color display. Fused image data produced 100% tumor sensitivity due to the simultaneous availability of MRI and UCBCT information during tumor diagnosis. Fused image data produced excellent hepatic tumor sensitivity, detectability and diagnostic ability compared to the other datasets assessed. Fused image data are extremely reliable and useful compared to UCBCT cross-sectional or PBV image datasets for depicting hepatic tumors during TACE. Partial anatomical visualization on cross-sectional images was compensated for by fused image data during tumor diagnosis.

  8. Study of strength properties of ceramic composites with soft filler based on 3D computer simulation

    NASA Astrophysics Data System (ADS)

    Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.

    2016-11-01

    The movable cellular automaton method, a computational method of particle mechanics, is applied to simulating uniaxial compression of 3D specimens of a ceramic composite. Soft inclusions were considered explicitly by changing the sort (properties) of automata selected randomly from the original fcc packing. The spatial distribution of inclusions, their size, and their total fraction were varied. For each value of inclusion fraction, several representative specimens with individual inclusion positions in space were generated. The resulting magnitudes of the elastic modulus and strength of the specimens were scattered and well described by the Weibull distribution. We showed that, to reveal the dependence of the elastic and strength properties of the composite on the inclusion fraction, it is much better to consider the mathematical expectation of the corresponding Weibull distribution rather than the average of the values for specimens of the same inclusion fraction. It is shown that the relation between the mechanical properties of the material and its inclusion fraction depends significantly on the material structure. In particular, the percolation transition from isolated inclusions to interconnected clusters of inclusions manifests itself strongly in the dependence of strength on the inclusion fraction. Thus, the curve of strength versus inclusion fraction fits different equations for different kinds of structure.
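
    The distinction drawn above between the plain sample average and the mathematical expectation of a fitted Weibull distribution can be illustrated with SciPy; the strength values below are synthetic placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import weibull_min

# Simulated strength values for specimens with the same inclusion fraction
# (illustrative numbers only, not the paper's data).
strengths = weibull_min.rvs(c=8.0, scale=120.0, size=30, random_state=2)

# Fit a two-parameter Weibull distribution (location fixed at zero) and
# compare its mathematical expectation with the plain sample average.
shape, loc, scale = weibull_min.fit(strengths, floc=0.0)
weibull_mean = weibull_min.mean(shape, loc=loc, scale=scale)

print(f"sample average     : {strengths.mean():.2f}")
print(f"Weibull expectation: {weibull_mean:.2f}")
```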

  9. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    NASA Astrophysics Data System (ADS)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for the comparison of Ultrasound Computer Tomography images to other modalities, as well as for use in further image processing, is to segment the breast from the background. In this work we present a (semi-)automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be reduced significantly, by a factor of four, compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation, with an average of 11% differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing fully automated use of our segmentation approach.

  10. Computationally designed lattices with tuned properties for tissue engineering using 3D printing

    PubMed Central

    Gonella, Veronica C.; Engensperger, Max; Ferguson, Stephen J.; Shea, Kristina

    2017-01-01

    Tissue scaffolds provide structural support while facilitating tissue growth, but are challenging to design due to diverse property trade-offs. Here, a computational approach was developed for modeling scaffolds with lattice structures of eight different topologies and assessing properties relevant to bone tissue engineering applications. Evaluated properties include porosity, pore size, surface-volume ratio, elastic modulus, shear modulus, and permeability. Lattice topologies were generated by patterning beam-based unit cells, with design parameters for beam diameter and unit cell length. Finite element simulations were conducted for each topology and quantified how elastic modulus and shear modulus scale with porosity, and how permeability scales with porosity cubed over surface-volume ratio squared. Lattices were compared with controlled properties related to porosity and pore size. Relative comparisons suggest that lattice topology leads to specializations in achievable properties. For instance, Cube topologies tend to have high elastic and low shear moduli while Octet topologies have high shear moduli and surface-volume ratios but low permeability. The developed method was utilized to analyze property trade-offs as beam diameter was altered for a given topology, and used to prototype a 3D printed lattice embedded in an interbody cage for spinal fusion treatments. Findings provide a basis for modeling and understanding relative differences among beam-based lattices designed to facilitate bone tissue growth. PMID:28797066
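
    The permeability scaling reported above is a Kozeny-Carman-like form and can be written down directly; the proportionality constant in the sketch below is a placeholder rather than a coefficient fitted in the paper, and the units are left arbitrary.

```python
def permeability_scaling(porosity, surface_volume_ratio, c_k=1.0):
    """Kozeny-Carman-like scaling quoted in the abstract:
    permeability proportional to porosity**3 / surface_volume_ratio**2.
    c_k is a placeholder proportionality constant, not a fitted value.
    """
    return c_k * porosity ** 3 / surface_volume_ratio ** 2

# Toy comparison of two hypothetical topologies with the same porosity but
# different surface-volume ratios (arbitrary units).
print(permeability_scaling(0.7, 2.0))
print(permeability_scaling(0.7, 4.0))   # higher S/V ratio -> lower permeability
```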

  11. Computational modeling of cerebral aneurysms in arterial networks reconstructed from multiple 3D rotational angiography images

    NASA Astrophysics Data System (ADS)

    Castro, Marcelo A.; Putman, Christopher M.; Cebral, Juan R.

    2005-04-01

    Previous patient-specific computational fluid dynamics (CFD) models of cerebral aneurysms constructed from 3D rotational angiography have been limited to aneurysms with a single route of blood flow. However, there are numerous aneurysms that accept blood flow from more than one avenue of flow such as aneurysms in the anterior communicating artery. Although the anatomy of these aneurysms could be visualized with other modalities such as CTA and MRA, cerebral rotational angiography has the highest resolution, and is therefore the preferred modality for vascular CFD modeling. The purpose of this paper is to present a novel methodology to construct personalized CFD models of cerebral aneurysms with multiple feeding vessels from multiple rotational angiography images. The methodology is illustrated with two examples: a model of an anterior communicating artery aneurysm constructed from bilateral rotational angiography images, and a model of the complete circle of Willis of a patient with five cerebral aneurysms. In addition, a sensitivity analysis of the intraaneurysmal flow patterns with respect to mean flow balance in the feeding vessels was performed. It was found that the flow patterns strongly depend on the geometry of the aneurysms and the connected vessels, but less on the changes in the flow balance. These types of models are important for studying the hemodynamics of cerebral aneurysms and further our understanding of the disease progression and rupture, as well as for simulating the effect of surgical and endovascular interventions.

  12. Correlation of intrapartum translabial ultrasound parameters with computed tomographic 3D reconstruction of the female pelvis.

    PubMed

    Armbrust, Robert; Henrich, Wolfgang; Hinkson, Larry; Grieser, Christian; Siedentopf, Jan-Peter

    2016-07-01

    Intrapartum translabial ultrasound (ITU) can be an objective, reproducible and more reliable method than digital vaginal examination when evaluating fetal head position and station in a prolonged second stage of labor. However, two-dimensional (2D) ultrasound is not sufficient to demonstrate the ischial spines and other important "landmarks" of the female pelvis. Therefore, the purpose of this study was to evaluate the distance of the interspinous plane, as a line parallel to the infrapubic line, in 2D ITU with the help of 3D computed tomography and digital reconstruction. The mean distance between the infrapubic plane and the tip of the ischial spine was 32.35 (±4.46) mm. The mean height was 166 (±7) cm; the mean weight was 67.5 (±18.4) kg. Body height and the measured distance were significantly correlated (P=0.025; correlation coefficient of 0.5), whereas body weight was not (P=0.37; correlation coefficient of -0.214). With the present results, clinicians are enabled to transfer the reproducible measurements of the "head station" by ITU to the widespread but observer-dependent vaginal examination. Furthermore, ITU can be verified as an objective method in comparison to subjective palpation, with the ability to optimize the evaluation of the head station according to bony structures as landmarks in a standardized application.

  13. A computed tomography approach for understanding 3D deformation patterns in complex folds

    NASA Astrophysics Data System (ADS)

    Ramón, M.a.José; Pueyo, Emilio L.; Rodríguez-Pintó, Adriana; Ros, Luis H.; Pocoví, Andrés; Briz, José Luis; Ciria, José Carlos

    2013-05-01

    Analog models are an important tool for understanding complexly folded and faulted geological structures. In this paper, we propose the use of X-ray computed tomography to accurately reconstruct the geometry of analog models using an orthogonal reference system and to completely characterize deformation patterns within the modeled structure in 3D. The rheology and radiological contrast of various materials have been studied, showing that EVA sheets are a good choice to model buckling layers. After considering various possibilities to define the reference system, we opted to screen-print two orthogonal sets of parallel lines on the surfaces using minium (lead tetroxide). The model was then built with gOcad using a series of CT slices that can be closely spaced. This kind of model allows us to reconstruct the volume distribution of strain ellipsoids and can be very accurate and useful to ascertain the orientation of folded lineations in complex structures as well as to characterize the expected deformation on the surfaces. We have built a simple analog model inspired by the Balzes Anticline (located in the External Sierras, Southern Pyrenees) to illustrate the potential of the technique and to analyze the deformation patterns of this complex curved fold that has accommodated significant magnitudes of vertical-axis rotation during its formation.

  14. Dynamic 3-D computer graphics for designing a diagnostic tool for patients with schizophrenia.

    PubMed

    Farkas, Attila; Papathomas, Thomas V; Silverstein, Steven M; Kourtev, Hristiyan; Papayanopoulos, John F

    2016-11-01

    We introduce a novel procedure that uses dynamic 3-D computer graphics as a diagnostic tool for assessing disease severity in schizophrenia patients, based on their reduced influence of top-down cognitive processes in interpreting bottom-up sensory input. Our procedure uses the hollow-mask illusion, in which the concave side of the mask is misperceived as convex, because familiarity with convex faces dominates sensory cues signaling a concave mask. It is known that schizophrenia patients resist this illusion and that their resistance increases with illness severity. Our method uses virtual masks rendered with two competing textures: (a) realistic features that enhance the illusion; (b) random-dot visual noise that reduces the illusion. We control the relative weights of the two textures to obtain psychometric functions for controls and patients and to assess illness severity. The primary novelty is the use of a rotating mask that is easy to implement on a wide variety of portable devices and avoids the elaborate stereoscopic devices that have been used in the past. Thus our method, which can also be used to assess the efficacy of treatments, gives clinicians the advantage of bringing the test to the patient's own environment, instead of having to bring patients to the clinic.
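
    Obtaining a psychometric function from such a procedure usually amounts to fitting a sigmoid to the fraction of illusion responses as a function of the realistic-texture weight. A generic logistic fit is sketched below with hypothetical response data; it is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x50, slope):
    """Logistic psychometric function: probability of perceiving the illusion
    as a function of the realistic-texture weight x in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(x - x50) / slope))

# Hypothetical data: texture weights and observed fraction of illusion responses.
weights = np.linspace(0.0, 1.0, 9)
p_illusion = np.array([0.05, 0.10, 0.20, 0.35, 0.55, 0.70, 0.85, 0.90, 0.95])

params, _ = curve_fit(psychometric, weights, p_illusion, p0=[0.5, 0.1])
print(f"illusion threshold (50% point): {params[0]:.2f}, slope: {params[1]:.2f}")
```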

  15. Computed Tomography 3-D Imaging of the Metal Deformation Flow Path in Friction Stir Welding

    NASA Technical Reports Server (NTRS)

    Schneider, Judy; Beshears, Ronald; Nunes, Arthur C., Jr.

    2005-01-01

    In friction stir welding (FSW), a rotating threaded pin tool is inserted into a weld seam and literally stirs the edges of the seam together. To determine optimal processing parameters for producing a defect free weld, a better understanding of the resulting metal deformation flow path is required. Marker studies are the principal method of studying the metal deformation flow path around the FSW pin tool. In our study, we have used computed tomography (CT) scans to reveal the flow pattern of a lead wire embedded in a FSW weld seam. At the welding temperature of aluminum, the lead becomes molten and is carried with the macro-flow of the weld metal. By using CT images, a 3-dimensional (3D) image of the lead flow pattern can be reconstructed. CT imaging was found to be a convenient and comprehensive way of collecting and displaying tracer data. It marks an advance over previous more tedious and ambiguous radiographic/metallographic data collection methods.

  16. Computer-Assisted 3D Kinematic Analysis of All Leg Joints in Walking Insects

    PubMed Central

    Bender, John A.; Simpson, Elaine M.; Ritzmann, Roy E.

    2010-01-01

    High-speed video can provide fine-scaled analysis of animal behavior. However, extracting behavioral data from video sequences is a time-consuming, tedious, subjective task. These issues are exacerbated where accurate behavioral descriptions require analysis of multiple points in three dimensions. We describe a new computer program written to assist a user in simultaneously extracting three-dimensional kinematics of multiple points on each of an insect's six legs. Digital video of a walking cockroach was collected in grayscale at 500 fps from two synchronized, calibrated cameras. We improved the legs' visibility by painting white dots on the joints, similar to techniques used for digitizing human motion. Compared to manual digitization of 26 points on the legs over a single, 8-second bout of walking (or 106,496 individual 3D points), our software achieved approximately 90% of the accuracy with 10% of the labor. Our experimental design reduced the complexity of the tracking problem by tethering the insect and allowing it to walk in place on a lightly oiled glass surface, but in principle, the algorithms implemented are extensible to free walking. Our software is free and open-source, written in the free language Python and including a graphical user interface for configuration and control. We encourage collaborative enhancements to make this tool both better and widely utilized. PMID:21049024
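
    The core geometric step in recovering 3D joint positions from two synchronized, calibrated cameras is triangulation. A standard linear (DLT) triangulation is sketched below with synthetic camera matrices; this is a textbook method and not necessarily the implementation used in the software described above.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2   : 3x4 camera projection matrices.
    uv1, uv2 : corresponding pixel coordinates (u, v) in each view.
    Returns the 3D point in the world frame.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy usage with two synthetic cameras observing the point (0.1, 0.2, 2.0).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.1, 0.2, 2.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, uv1, uv2))   # approx. [0.1, 0.2, 2.0]
```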

  17. High-Performance Active Liquid Crystalline Shutters for Stereo Computer Graphics and Other 3-D Technologies

    NASA Astrophysics Data System (ADS)

    Sergan, Tatiana; Sergan, Vassili; MacNaughton, Boyd

    2007-03-01

    Stereoscopic computer displays create a 3-D image by alternating two separate images, one for each of the viewer's eyes. Field-sequential viewing systems supply each eye with the appropriate image by blocking the image intended for the other eye. In our work, we have developed a new mode of operation of a liquid crystal shutter that provides highly effective blockage of undesired images in all viewing directions and eliminates the color shifts associated with long turn-off times. The goal was achieved by using a π-cell filled with a low-rotational-viscosity, high-birefringence fluid and additional negative-birefringence films with a splayed optic-axis distribution. The shutter demonstrates a contrast ratio higher than 800:1 for head-on viewing and 10:1 within a viewing cone of about 45°. The relaxation time of the shutter does not exceed 2 ms and is the same for all three primary colors.

  18. Computationally designed lattices with tuned properties for tissue engineering using 3D printing.

    PubMed

    Egan, Paul F; Gonella, Veronica C; Engensperger, Max; Ferguson, Stephen J; Shea, Kristina

    2017-01-01

    Tissue scaffolds provide structural support while facilitating tissue growth, but are challenging to design due to diverse property trade-offs. Here, a computational approach was developed for modeling scaffolds with lattice structures of eight different topologies and assessing properties relevant to bone tissue engineering applications. Evaluated properties include porosity, pore size, surface-volume ratio, elastic modulus, shear modulus, and permeability. Lattice topologies were generated by patterning beam-based unit cells, with design parameters for beam diameter and unit cell length. Finite element simulations were conducted for each topology and quantified how elastic modulus and shear modulus scale with porosity, and how permeability scales with porosity cubed over surface-volume ratio squared. Lattices were compared with controlled properties related to porosity and pore size. Relative comparisons suggest that lattice topology leads to specializations in achievable properties. For instance, Cube topologies tend to have high elastic and low shear moduli while Octet topologies have high shear moduli and surface-volume ratios but low permeability. The developed method was utilized to analyze property trade-offs as beam diameter was altered for a given topology, and used to prototype a 3D printed lattice embedded in an interbody cage for spinal fusion treatments. Findings provide a basis for modeling and understanding relative differences among beam-based lattices designed to facilitate bone tissue growth.

  19. Computational Analysis of Torsional Buckling Behavior of 3D 4-Directional Braided Composites Shafts

    NASA Astrophysics Data System (ADS)

    Huang, Xinrong; Liu, Ye; Hao, Wenfeng; Liu, Yinghua; Zhu, Jianguo

    2017-06-01

    The torsional buckling behavior of 3D 4-directional braided composites shafts was analyzed in this work. First, unit cell models of 3D 4-directional braided composites shafts with different braiding angles and fiber volume fractions were built. Then, the elastic parameters of the 3D 4-directional braided composites shafts were predicted using the unit cells under different boundary conditions. Finally, the torsional buckling eigenvalues and buckling modes of the composites shafts were obtained by numerical simulation, and the effects of braiding angle and fiber volume fraction on the torsional buckling behavior of 3D 4-directional braided composites shafts were analyzed. The simulation results show that the buckling eigenvalues increase with increasing braiding angle and fiber volume fraction. This work will play an important role in the design of 3D 4-directional braided composites shafts.

  20. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    PubMed

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a fully mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future
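
    Once per-segment volumes and densities have been chosen, combining them into whole-body mass, centre of mass and a moment of inertia is straightforward mechanics. The sketch below shows that bookkeeping for a single (mediolateral) axis using the parallel-axis theorem, with made-up segment values rather than any of the study's reconstructions.

```python
import numpy as np

def composite_mass_properties(masses, centres, inertias):
    """Combine per-segment mass properties into whole-body values.

    masses   : (n,) segment masses in kg.
    centres  : (n, 3) segment centre-of-mass positions in metres (x, y, z).
    inertias : (n,) segment moments of inertia (kg m^2) about a mediolateral
               (y) axis through each segment's own centre of mass.
    Returns total mass, whole-body centre of mass, and the whole-body moment
    of inertia about the y axis through the whole-body centre of mass.
    """
    masses = np.asarray(masses, float)
    centres = np.asarray(centres, float)
    total_mass = masses.sum()
    com = (masses[:, None] * centres).sum(axis=0) / total_mass
    # Parallel-axis shift: squared distance of each segment COM from the body
    # COM, measured perpendicular to the y axis (i.e. in the x-z plane).
    d2 = ((centres - com)[:, [0, 2]] ** 2).sum(axis=1)
    inertia = np.sum(np.asarray(inertias, float) + masses * d2)
    return total_mass, com, inertia

# Toy usage with three hypothetical body segments.
m = [3000.0, 1500.0, 800.0]
c = [[0.0, 1.5, 0.0], [1.2, 1.6, 0.0], [-1.0, 1.4, 0.0]]
I = [900.0, 400.0, 150.0]
print(composite_mass_properties(m, c, I))
```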

  1. Computational fluid dynamics simulations of blood flow regularized by 3D phase contrast MRI.

    PubMed

    Rispoli, Vinicius C; Nielsen, Jon F; Nayak, Krishna S; Carvalho, Joao L A

    2015-11-26

    Phase contrast magnetic resonance imaging (PC-MRI) is used clinically for quantitative assessment of cardiovascular flow and function, as it is capable of providing directly measured 3D velocity maps. Alternatively, vascular flow can be estimated from model-based computational fluid dynamics (CFD) calculations. CFD provides arbitrarily high resolution, but its accuracy hinges on model assumptions, while velocity fields measured with PC-MRI generally do not satisfy the equations of fluid dynamics, provide limited resolution, and suffer from partial volume effects. The purpose of this study is to develop a proof-of-concept numerical procedure for constructing a simulated flow field that is influenced by both direct PC-MRI measurements and a fluid physics model, thereby taking advantage of both the accuracy of PC-MRI and the high spatial resolution of CFD. The use of the proposed approach in regularizing 3D flow fields is evaluated. The proposed algorithm incorporates both a Newtonian fluid physics model and a linear PC-MRI signal model. The model equations are solved numerically using a modified CFD algorithm. The numerical solution corresponds to the optimal solution of a generalized Tikhonov regularization, which provides a flow field that satisfies the flow physics equations while being close enough to the measured PC-MRI velocity profile. The feasibility of the proposed approach is demonstrated on data from the carotid bifurcation of one healthy volunteer, and also from a pulsatile carotid flow phantom. The proposed solver produces flow fields that are in better agreement with direct PC-MRI measurements than CFD alone, and converges faster, while closely satisfying the fluid dynamics equations. For the implementation that provided the best results, the signal-to-error ratio (with respect to the PC-MRI measurements) in the phantom experiment was 6.56 dB higher than that of conventional CFD; in the in vivo experiment, it was 2.15 dB higher. The proposed approach
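
    The generalized Tikhonov regularization mentioned above balances fidelity to measurements against a model-based penalty. A minimal linear-algebra sketch of that trade-off, together with a signal-to-error ratio in dB, is given below; the operators A and L here are generic placeholders, not the PC-MRI signal model or the discretized flow-physics operator of the paper.

```python
import numpy as np

def tikhonov_solve(A, b, L, lam):
    """Generalized Tikhonov regularization:
    minimize ||A x - b||^2 + lam * ||L x||^2,
    solved via the normal equations (A^T A + lam L^T L) x = A^T b.
    In this sketch, b stands in for the measured data and L for a penalty
    operator; both are placeholders.
    """
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

def signal_to_error_ratio_db(measured, estimated):
    """Signal-to-error ratio (dB) of an estimate relative to measurements."""
    err = measured - estimated
    return 10.0 * np.log10(np.sum(measured ** 2) / np.sum(err ** 2))

# Toy usage on a small noisy linear system.
rng = np.random.default_rng(3)
A = rng.normal(size=(40, 20)); x_true = rng.normal(size=20)
b = A @ x_true + rng.normal(scale=0.1, size=40)
x_hat = tikhonov_solve(A, b, np.eye(20), lam=0.1)
print(signal_to_error_ratio_db(b, A @ x_hat))
```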

  2. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    PubMed Central

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a fully mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  3. Does this computational theory solve the right problem? Marr, Gibson, and the goal of vision

    PubMed Central

    Warren, William H.

    2013-01-01

    David Marr’s (1982) book Vision attempted to formulate a thoroughgoing formal theory of perception. Marr borrowed much of the “computational” level from James Gibson: a proper understanding of the goal of vision, the natural constraints, and the available information is prerequisite to describing the processes and mechanisms by which the goal is achieved. Yet as a research program leading to a computational model of human vision, Marr’s program did not succeed. This article asks why, using the perception of 3D shape as a morality tale. Marr presumed that the goal of vision is to recover a general-purpose Euclidean description of the world, which can be deployed for any task or action. On this formulation, vision is underdetermined by information, which in turn necessitates auxiliary assumptions to solve the problem. But Marr’s assumptions did not actually reflect natural constraints, and consequently the solutions were not robust. We now know that humans do not in fact recover Euclidean structure – rather, they reliably perceive qualitative shape (hills, dales, courses, ridges), which is specified by the 2nd-order differential structure of images. By recasting the goals of vision in terms of our perceptual competencies, and doing the hard work of analyzing the information available under ecological constraints, we can reformulate the problem so that perception is determined by information and prior knowledge is unnecessary. PMID:23409371

  4. Non-intubated subxiphoid uniportal video-assisted thoracoscopic thymectomy using glasses-free 3D vision

    PubMed Central

    Jiang, Long; Liu, Jun; Shao, Wenlong; Li, Jingpei

    2016-01-01

    Trans-sternal thymectomy has long been accepted as the standard surgical procedure for thymic masses. Recently, minimally invasive methods, such as video-assisted thoracoscopic surgery (VATS) and, even more recently, non-intubated anesthesia, have emerged. These methods offer advantages including reduced surgical trauma and postoperative pain and, in the case of VATS, certain cosmetic benefits. Considering these advantages, we herein present a case of subxiphoid uniportal VATS for a thymic mass using a glasses-free 3D thoracoscopic display system. PMID:28149591

  5. Image quality and effective dose of a robotic flat panel 3D C-arm vs computed tomography.

    PubMed

    Kraus, Michael; Fischer, Eric; Gebhard, Florian; Richter, Peter H

    2016-12-01

    The aim of this study was to determine the effective dose and corresponding image quality of different imaging protocols of a robotic 3D flat panel C-arm in comparison to computed tomography (CT). Dose measurements were performed using a Rando-Alderson phantom. The phantom was exposed to different scanning protocols of the 3D C-arm and the CT. Pedicle screws were inserted in a fresh swine cadaver. Images were obtained using the same scanning protocols. At the thoracolumbar junction, the effective dose was comparable for the 3D high-dose protocols with (4.4 mSv) and without (4.3 mSv) collimation, routine CT (5 mSv), and a dose-reduction CT (4.0 mSv). A relevant reduction was achieved with the 3D low-dose protocol (1.0 mSv). Focusing on Th6, a similar reduction with the 3D low-dose protocol was achieved. The image quality of the 3D protocols using titanium screws was rated as 'good' by all viewers, with excellent correlation. Modern intra-operative 3D C-arms produce images of CT-like quality at a low radiation dose. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    PubMed Central

    Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae

    2009-01-01

    In this study, we propose a precise 3D lug pose detection sensor for the automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle used to carry the plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For lug pose acquisition, four laser lines are projected onto both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, vertical thresholding, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. Lug pose acquisition is carried out in two stages: top view alignment and side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007
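
    A much simplified version of the threshold-and-Hough stage of such a pipeline can be written with OpenCV as below; the thinning and separated Hough transform steps are omitted, and all threshold values are placeholders rather than the paper's settings.

```python
import cv2
import numpy as np

def detect_laser_lines(image_bgr, intensity_threshold=200):
    """Detect bright projected laser lines in a camera image (a simplified
    threshold -> Hough sketch; the thinning step of the paper is omitted).
    Returns line segments as (x1, y1, x2, y2) tuples, or an empty list.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, intensity_threshold, 255, cv2.THRESH_BINARY)
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    return [tuple(seg[0]) for seg in lines] if lines is not None else []

# Usage (assuming "frame.png" is a captured camera image):
# segments = detect_laser_lines(cv2.imread("frame.png"))
```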

  7. Analysis of 3D Prints by X-ray Computed Microtomography and Terahertz Pulsed Imaging.

    PubMed

    Markl, Daniel; Zeitler, J Axel; Rasch, Cecilie; Michaelsen, Maria Høtoft; Müllertz, Anette; Rantanen, Jukka; Rades, Thomas; Bøtker, Johan

    2017-05-01

    A 3D printer was used to realise compartmental dosage forms containing multiple active pharmaceutical ingredient (API) formulations. This work demonstrates the microstructural characterisation of 3D printed solid dosage forms using X-ray computed microtomography (XμCT) and terahertz pulsed imaging (TPI). Printing was performed with either polyvinyl alcohol (PVA) or polylactic acid (PLA), and the structures were examined by XμCT and TPI. Liquid self-nanoemulsifying drug delivery system (SNEDDS) formulations containing saquinavir and halofantrine were incorporated into the 3D printed compartmentalised structures and in vitro drug release was determined. A clear difference in pore structure between PVA and PLA prints was observed by extracting the porosity (5.5% for PVA and 0.2% for PLA prints), pore length and pore volume from the XμCT data. The print resolution and accuracy were characterised by XμCT and TPI on the basis of the computer-aided design (CAD) models of the dosage form (compartmentalised PVA structures were 7.5 ± 0.75% larger than designed; n = 3). The 3D printer can reproduce specific structures very accurately, although the 3D prints can deviate from the designed model. The microstructural information extracted by XμCT and TPI will help to provide a better understanding of the performance of 3D printed dosage forms.

  8. Fast calculation of computer-generated holograms based on 3-D Fourier spectrum for omnidirectional diffraction from a 3-D voxel-based object.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Yatagai, Toyohiko

    2012-09-10

    We have derived the basic spectral relation between a 3-D object and its 2-D diffracted wavefront by interpreting the diffraction calculation in the 3-D Fourier domain. Information on the 3-D object, which is inherent in the diffracted wavefront, becomes clear by using this relation. After the derivation, a method for obtaining the Fourier spectrum that is required to synthesize a hologram with a realistic sampling number for visible light is described. Finally, to verify the validity and the practicality of the above-mentioned spectral relation, fast calculation of a series of wavefronts radially diffracted from a 3-D voxel-based object is demonstrated.

  9. RADStation3G: a platform for cardiovascular image analysis integrating PACS, 3D+t visualization and grid computing.

    PubMed

    Perez, F; Huguet, J; Aguilar, R; Lara, L; Larrabide, I; Villa-Uriol, M C; López, J; Macho, J M; Rigo, A; Rosselló, J; Vera, S; Vivas, E; Fernàndez, J; Arbona, A; Frangi, A F; Herrero Jover, J; González Ballester, M A

    2013-06-01

    RADStation3G is a software platform for cardiovascular image analysis and surgery planning. It provides image visualization and management in 2D, 3D and 3D+t; data storage (images or operational results) in a PACS (using DICOM); and exploitation of patients' data such as images and pathologies. Further, it provides support for computationally expensive processes with grid technology. In this article we first introduce the platform and present a comparison with existing systems, organized according to the platform's modules (cardiology, angiology, enriched searching of the PACS archive, and grid computing), and then describe RADStation3G in detail.

  10. Full parallax viewing-angle enhanced computer-generated holographic 3D display system using integral lens array.

    PubMed

    Choi, Kyongsik; Kim, Joohwan; Lim, Yongjun; Lee, Byoungho

    2005-12-26

    A novel full parallax and viewing-angle enhanced computer-generated holographic (CGH) three-dimensional (3D) display system is proposed and implemented by combining an integral lens array with colorized synthetic phase holograms displayed on a phase-type spatial light modulator. For analyzing the viewing-angle limitations of our CGH 3D display system, we provide some theoretical background and introduce a simple ray-tracing method for 3D image reconstruction. With our method we can obtain continuously varying full parallax 3D images with a viewing angle of about ±6°. To design the colorized phase holograms, we used a modified iterative Fourier transform algorithm, and we obtained a high diffraction efficiency (~92.5%) and a large signal-to-noise ratio (~11 dB) in our simulation results. Finally we show experimental results that verify our concept and demonstrate the full parallax, viewing-angle enhanced color CGH display system.
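
    For orientation, the textbook Gerchberg-Saxton form of an iterative Fourier transform algorithm for a phase-only hologram is sketched below. It omits the paper's modifications, colorization and lens-array geometry, and the target pattern is a made-up example.

```python
import numpy as np

def ifta_phase_hologram(target_intensity, iterations=50, seed=0):
    """Plain Gerchberg-Saxton style iterative Fourier transform algorithm
    for a phase-only hologram (illustrative textbook version only).

    target_intensity : 2D array with the desired reconstruction intensity.
    Returns the hologram phase in radians.
    """
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_intensity.shape))
    for _ in range(iterations):
        recon = np.fft.fft2(field)
        recon = target_amp * np.exp(1j * np.angle(recon))  # impose target amplitude
        field = np.fft.ifft2(recon)
        field = np.exp(1j * np.angle(field))               # phase-only constraint
    return np.angle(field)

# Toy usage: a bright square as the target reconstruction.
target = np.zeros((128, 128)); target[48:80, 48:80] = 1.0
phase = ifta_phase_hologram(target)
```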

  11. 3D printing of preclinical X-ray computed tomographic data sets.

    PubMed

    Doney, Evan; Krumdick, Lauren A; Diener, Justin M; Wathen, Connor A; Chapman, Sarah E; Stamile, Brian; Scott, Jeremiah E; Ravosa, Matthew J; Van Avermaete, Tony; Leevy, W Matthew

    2013-03-22

    Three-dimensional printing allows for the production of highly detailed objects through a process known as additive manufacturing. Traditional mold-injection methods to create models or parts have several limitations, the most important of which is a difficulty in making highly complex products in a timely, cost-effective manner.(1) However, gradual improvements in three-dimensional printing technology have resulted in both high-end and economy instruments that are now available for the facile production of customized models.(2) These printers have the ability to extrude high-resolution objects with enough detail to accurately represent in vivo images generated from a preclinical X-ray CT scanner. With proper data collection, surface rendering, and stereolithographic editing, it is now possible and inexpensive to rapidly produce detailed skeletal and soft tissue structures from X-ray CT data. Even in the early stages of development, the anatomical models produced by three-dimensional printing appeal to both educators and researchers who can utilize the technology to improve visualization proficiency.(3, 4) The real benefits of this method result from the tangible experience a researcher can have with data that cannot be adequately conveyed through a computer screen. The translation of pre-clinical 3D data to a physical object that is an exact copy of the test subject is a powerful tool for visualization and communication, especially for relating imaging research to students, or those in other fields. Here, we provide a detailed method for printing plastic models of bone and organ structures derived from X-ray CT scans utilizing an Albira X-ray CT system in conjunction with PMOD, ImageJ, Meshlab, Netfabb, and ReplicatorG software packages.

  12. Enhancing training performance for brain-computer interface with object-directed 3D visual guidance.

    PubMed

    Liang, Shuang; Choi, Kup-Sze; Qin, Jing; Pang, Wai-Man; Heng, Pheng-Ann

    2016-11-01

    The accuracy of the classification of user intentions is essential for motor imagery (MI)-based brain-computer interfaces (BCI). Effective and appropriate training could help produce highly reliable decoding of the mental decisions associated with MI tasks. In this study, we aimed to investigate the effects of visual guidance on the classification performance of MI-based BCI. Leveraging both the single-subject and the multi-subject BCI paradigms, we train and classify MI tasks in three different scenarios in a 3D virtual environment: a non-object-directed scenario, a static object-directed scenario, and a dynamic object-directed scenario. Subjects are required to imagine left-hand or right-hand movement with the visual guidance. We demonstrate that the classification performance of left-hand and right-hand MI tasks differs across these three scenarios, and confirm that both the static and the dynamic object-directed scenarios provide better classification accuracy than the non-object-directed case. We further show that both object-directed scenarios shorten the response time and are suitable when training data are limited. In addition, the experimental results demonstrate that the multi-subject BCI paradigm improves classification performance compared with the single-subject paradigm. These results suggest that it is possible to improve classification performance with appropriate visual guidance and a better BCI paradigm. We believe that our findings have the potential to improve the classification performance of MI-based BCI and to be applied in practical applications.

  13. Detection of bone erosions in early rheumatoid arthritis: 3D ultrasonography versus computed tomography.

    PubMed

    Peluso, G; Bosello, S L; Gremese, E; Mirone, L; Di Gregorio, F; Di Molfetta, V; Pirronti, T; Ferraccioli, G

    2015-07-01

    Three-dimensional (3D) volumetric ultrasonography (US) is an interesting tool that could improve the traditional approach to musculoskeletal US in rheumatology, due to its virtual operator independence and reduced examination time. The aim of this study was to investigate the performance of 3DUS in the detection of bone erosions in the hand and wrist joints of early rheumatoid arthritis (ERA) patients, with computed tomography (CT) as the reference method. Twenty ERA patients without erosions on standard radiography of the hands and wrists underwent 3DUS and CT evaluation of eleven joints: the radiocarpal, intercarpal, ulnocarpal, second to fifth metacarpo-phalangeal (MCP), and second to fifth proximal interphalangeal (PIP) joints of the dominant hand. Eleven (55.0%) patients were erosive with CT, and ten of them were also erosive at 3DUS evaluation. In five patients, 3DUS identified cortical breaks that were not erosions at CT evaluation. Considering CT as the gold standard to identify erosive patients, the 3DUS sensitivity, specificity, PPV, and NPV were 0.9, 0.55, 0.71, and 0.83, respectively. A total of 32 erosions were detected with CT; 15 of them were also observed at the same sites with 3DUS, whereas 17 were not seen on 3DUS evaluation. The majority of these 3DUS false-negative erosions were in the wrist joints. Furthermore, 18 erosions recorded by 3DUS were false positives. The majority of these 3DUS false-positive erosions were located at the PIP joints. This study underlines the limits of 3DUS in detecting individual bone erosions, mostly at the wrist, despite its good sensitivity in identifying erosive patients.
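
    The patient-level accuracy figures quoted above follow directly from a 2x2 contingency table against the CT reference. The sketch below computes them from hypothetical counts chosen only for illustration, not taken from the study's tables.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Patient-level sensitivity, specificity, PPV and NPV from counts of
    true/false positives and negatives (reference standard: CT)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for illustration.
print(diagnostic_metrics(tp=10, fp=4, tn=5, fn=1))
```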

  14. Conceptual detector development and Monte Carlo simulation of a novel 3D breast computed tomography system

    NASA Astrophysics Data System (ADS)

    Ziegle, Jens; Müller, Bernhard H.; Neumann, Bernd; Hoeschen, Christoph

    2016-03-01

    A new 3D breast computed tomography (CT) system is under development enabling imaging of microcalcifications in a fully uncompressed breast, including posterior chest wall tissue. The system setup uses a steered electron beam impinging on small tungsten targets surrounding the breast to emit X-rays. A realization of the corresponding detector concept is presented in this work, and it is modeled through Monte Carlo simulations in order to quantify first characteristics of transmission and secondary photons. The modeled system comprises a vertical alignment of linear detectors held by a case that also hosts the breast. The detectors are separated by gaps to allow the passage of X-rays towards the breast volume. The detectors located directly on the opposite side of the gaps detect the incident X-rays. Mechanically moving parts in an imaging system increase the duration of image acquisition and thus can cause motion artifacts. A major advantage of the presented system design is therefore the combination of fixed detectors and a fast steering electron beam, which enables a greatly reduced scan time. Thereby potential motion artifacts are reduced, so that the visualization of small structures such as microcalcifications is improved. The result of the simulation of a single projection shows high attenuation by parts of the detector electronics, causing low count levels at the opposing detectors which would require a flat-field correction, but it also shows a secondary-to-transmission ratio of all counted X-rays of less than 1 percent. Additionally, a single slice with details of various sizes was reconstructed using filtered backprojection. The smallest detail still visible in the reconstructed image has a size of 0.2 mm.

  15. 3D Printing of Preclinical X-ray Computed Tomographic Data Sets

    PubMed Central

    Doney, Evan; Krumdick, Lauren A.; Diener, Justin M.; Wathen, Connor A.; Chapman, Sarah E.; Stamile, Brian; Scott, Jeremiah E.; Ravosa, Matthew J.; Van Avermaete, Tony; Leevy, W. Matthew

    2013-01-01

    Three-dimensional printing allows for the production of highly detailed objects through a process known as additive manufacturing. Traditional mold-injection methods to create models or parts have several limitations, the most important of which is the difficulty of making highly complex products in a timely, cost-effective manner [1]. However, gradual improvements in three-dimensional printing technology have resulted in both high-end and economy instruments that are now available for the facile production of customized models [2]. These printers have the ability to extrude high-resolution objects with enough detail to accurately represent in vivo images generated from a preclinical X-ray CT scanner. With proper data collection, surface rendering, and stereolithographic editing, it is now possible and inexpensive to rapidly produce detailed skeletal and soft tissue structures from X-ray CT data. Even in the early stages of development, the anatomical models produced by three-dimensional printing appeal to both educators and researchers, who can utilize the technology to improve visualization proficiency [3, 4]. The real benefits of this method result from the tangible experience a researcher can have with data that cannot be adequately conveyed through a computer screen. The translation of preclinical 3D data to a physical object that is an exact copy of the test subject is a powerful tool for visualization and communication, especially for relating imaging research to students or those in other fields. Here, we provide a detailed method for printing plastic models of bone and organ structures derived from X-ray CT scans utilizing an Albira X-ray CT system in conjunction with the PMOD, ImageJ, Meshlab, Netfabb, and ReplicatorG software packages. PMID:23542702
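
    The surface-rendering and export stage of such a workflow can be sketched in a few lines of Python, assuming the CT volume has already been exported as a NumPy array. The file name, iso-value, and voxel spacing below are hypothetical, and this stands in for, rather than reproduces, the PMOD/ImageJ/Meshlab/Netfabb chain described in the paper.

        # Sketch of the CT-volume -> printable-mesh step (illustrative only).
        import numpy as np
        from skimage import measure
        import trimesh

        volume = np.load("ct_volume.npy")        # hypothetical pre-exported CT volume
        bone_threshold = 500                     # hypothetical iso-value for bone
        verts, faces, normals, values = measure.marching_cubes(
            volume, level=bone_threshold, spacing=(0.2, 0.2, 0.2))
        mesh = trimesh.Trimesh(vertices=verts, faces=faces)
        mesh.export("bone_surface.stl")          # STL ready for slicing and printing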

  16. Wavelet applied to computer vision in astrophysics

    NASA Astrophysics Data System (ADS)

    Bijaoui, Albert; Slezak, Eric; Traina, Myriam

    2004-02-01

    Multiscale analyses can be provided by applying wavelet transforms. For image processing purposes, we applied algorithms which imply a quasi-isotropic vision. For a uniform noisy image, a wavelet coefficient W has a probability density function (PDF) p(W) which depends on the noise statistics. The PDF was determined for many statistical noise models: Gaussian, Poisson, Rayleigh, and exponential. For CCD observations, the Anscombe transform was generalized to a mixed Gaussian+Poisson noise. From the discrete wavelet transform, a set of significant wavelet coefficients (SSWC) is obtained. Many applications have been derived, such as denoising and deconvolution. Our main application is the decomposition of the image into objects, i.e., the vision. At each scale an image labelling is performed on the SSWC. An interscale graph linking the fields of significant pixels is then obtained, and the objects are identified using this graph. The wavelet coefficients of the tree related to a given object allow one to reconstruct its image by a classical inverse method. This vision model has been applied to astronomical images, improving the analysis of complex structures.
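
    The idea of retaining only significant wavelet coefficients can be sketched with a simple k-sigma threshold. The snippet below uses PyWavelets and a standard orthogonal 2D DWT purely for illustration; the authors' approach relies on an isotropic (à trous) transform and noise-dependent PDFs, which are not reproduced here.

        # Generic k-sigma selection of significant wavelet coefficients (illustrative only).
        import numpy as np
        import pywt

        def significant_coefficients(image, wavelet="db2", levels=3, k=3.0):
            coeffs = pywt.wavedec2(image, wavelet, level=levels)
            approx, details = coeffs[0], coeffs[1:]
            # Robust noise estimate from the finest-scale diagonal coefficients.
            sigma = np.median(np.abs(details[-1][2])) / 0.6745
            kept = [approx]
            for (cH, cV, cD) in details:
                kept.append(tuple(np.where(np.abs(c) > k * sigma, c, 0.0)
                                  for c in (cH, cV, cD)))
            return pywt.waverec2(kept, wavelet)

        denoised = significant_coefficients(np.random.default_rng(0).normal(size=(128, 128)))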

  17. Computer numerical control (CNC) lithography: light-motion synchronized UV-LED lithography for 3D microfabrication

    NASA Astrophysics Data System (ADS)

    Kim, Jungkwun; Yoon, Yong-Kyu; Allen, Mark G.

    2016-03-01

    This paper presents a computer-numerical-controlled ultraviolet light-emitting diode (CNC UV-LED) lithography scheme for three-dimensional (3D) microfabrication. The CNC lithography scheme utilizes sequential multi-angled UV light exposures along with a synchronized switchable UV light source to create arbitrary 3D light traces, which are transferred into the photosensitive resist. The system comprises a switchable, movable UV-LED array as a light source, a motorized tilt-rotational sample holder, and a computer-control unit. System operation is such that the tilt-rotational sample holder moves in a pre-programmed routine, and the UV-LED is illuminated only at desired positions of the sample holder during the desired time period, enabling the formation of complex 3D microstructures. This facilitates easy fabrication of complex 3D structures, which otherwise would have required multiple manual exposure steps as in the previous multidirectional 3D UV lithography approach. Since it is batch processed, processing time is far less than that of the 3D printing approach at the expense of some reduction in the degree of achievable 3D structure complexity. In order to produce uniform light intensity from the arrayed LED light source, the UV-LED array stage has been kept rotating during exposure. UV-LED 3D fabrication capability was demonstrated through a plurality of complex structures such as V-shaped micropillars, micropanels, a micro-‘hi’ structure, a micro-‘cat’s claw,’ a micro-‘horn,’ a micro-‘calla lily,’ a micro-‘cowboy’s hat,’ and a micro-‘table napkin’ array.
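
    The light-motion synchronization at the heart of the scheme amounts to gating the LED array according to the pre-programmed pose routine. The sketch below is a hypothetical illustration of that control flow only: the stage and led objects are made-up placeholders rather than a real driver API, and the pose and dwell values are arbitrary.

        # Hypothetical synchronization sketch: move to a pose, gate the UV-LED, dwell.
        import time

        exposure_routine = [
            # (tilt_deg, rotation_deg, expose, dwell_s) -- illustrative values only
            (30.0,   0.0, True,  2.0),
            (30.0,  90.0, True,  2.0),
            (30.0, 180.0, False, 0.5),   # reposition only, light kept off
            (45.0, 180.0, True,  3.0),
        ]

        def run_exposure(stage, led, routine):
            for tilt, rotation, expose, dwell in routine:
                stage.move_to(tilt=tilt, rotation=rotation)  # placeholder blocking move
                if expose:
                    led.on()
                time.sleep(dwell)                            # dwell at this pose
                led.off()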

  18. Aspects of computer vision in surgical endoscopy

    NASA Astrophysics Data System (ADS)

    Rodin, Vincent; Ayache, Alain; Berreni, N.

    1993-09-01

    This work is related to a project in medical robotics applied to surgical endoscopy, led in collaboration with Doctor Berreni of the Saint Roch nursing home in Perpignan, France. Following Doctor Berreni's advice, two aspects of endoscopic color image processing have been identified: (1) aiding diagnosis through the automatic detection of diseased areas after a learning phase; (2) 3D reconstruction of the analyzed cavity by using a zoom.

  19. Three dimensional computer vision: Potential applications with curvature tracking

    SciTech Connect

    Sanford, Adam

    1996-05-01

    The purpose of this project is to develop a method of tracking data points for computer vision systems using curvature analysis. This is of particular importance to fellow researchers at the Lab, who have developed a markerless video computer vision system and are in need of such a method to track data points. A three-dimensional viewing program was created to analyze the geometry of surface patches. Virtual surfaces were plotted and processed by the program to determine the mean and Gaussian curvature parameters for each point on the surface, thus defining each point's surface geometry type. The same computer processes are then applied to each frame of data acquired by the computer vision system to find surface "landmarks" that hold constant curvature during motion. Preliminary results indicate that curvature analysis shows great promise and could solve the tracking dilemma faced by those in the field of markerless imaging systems.
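
    For a surface patch given as a height field z = f(x, y), the per-point mean and Gaussian curvatures follow from the first and second partial derivatives, and their signs classify the local geometry type (elliptic, hyperbolic, parabolic). The sketch below is a generic finite-difference illustration of that computation, not the viewing program described in the report.

        # Monge-patch Gaussian (K) and mean (H) curvature on a regular grid (illustrative only).
        import numpy as np

        def monge_curvatures(z, dx=1.0, dy=1.0):
            fy, fx = np.gradient(z, dy, dx)          # axis 0 = y, axis 1 = x
            fyy, fyx = np.gradient(fy, dy, dx)
            fxy, fxx = np.gradient(fx, dy, dx)
            denom = 1.0 + fx**2 + fy**2
            K = (fxx * fyy - fxy**2) / denom**2
            H = ((1 + fx**2) * fyy - 2 * fx * fy * fxy + (1 + fy**2) * fxx) / (2 * denom**1.5)
            return K, H

        # Example: a paraboloid bump is elliptic (K > 0) near its peak.
        x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
        K, H = monge_curvatures(-(x**2 + y**2), dx=x[0, 1] - x[0, 0], dy=y[1, 0] - y[0, 0])
        print(K[32, 32], H[32, 32])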

  20. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.
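
    A toy version of the generic pipeline surveyed here (segmentation, per-object feature extraction, and unsupervised clustering of the resulting profiles) can be written with scikit-image and scikit-learn; the particular choices below (Otsu thresholding, shape features, k-means) are examples and not the specific methods reviewed in the paper.

        # Illustrative segmentation -> features -> clustering sketch (not the reviewed tools).
        import numpy as np
        from skimage import data, filters, measure
        from sklearn.cluster import KMeans

        image = data.human_mitosis()                       # sample microscopy image of nuclei
        mask = image > filters.threshold_otsu(image)       # crude global-threshold segmentation
        labels = measure.label(mask)
        props = measure.regionprops_table(labels, properties=("area", "eccentricity", "perimeter"))
        features = np.column_stack([props["area"], props["eccentricity"], props["perimeter"]])
        clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
        print(np.bincount(clusters))                       # objects per phenotypic cluster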

  1. Towards Computing Full 3D Seismic Sensitivity: The Axisymmetric Spectral Element Method

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Fournier, A.; Dahlen, F. A.

    2004-12-01

    Finite-frequency tomography has recently provided detailed images of the Earth's deep interior. However, the Fréchet sensitivity kernels used in these inversions are calculated using ray theory and therefore cannot account for D''-diffracted phases or any caustics in the wavefield, as occur, e.g., in phases used to map boundary-layer topography. Our objective is to compile an extensive set of full sensitivity kernels based on seismic forward modeling to allow for the inversion of any seismic phase. The sensitivity of the wavefield to a scatterer off the theoretical ray path is generally determined by convolving the source-to-scatterer response with, using reciprocity, the receiver-to-scatterer response. Thus, exact kernels require knowledge of the Green's function for the full moment tensor (i.e., source) and body forces (i.e., receiver components) throughout the model space and time. We develop an axisymmetric spectral element method for elastodynamics to serve this purpose. The axisymmetric approach takes advantage of the fact that the kernels are computed for a spherically symmetric Earth model. In this reduced-dimension formulation, all moment tensor elements and single forces can be included and collectively unfold into six different 2D problems to be solved separately. The efficient simulations on a 2D mesh then allow for otherwise unattainable resolution at low hardware requirements. The displacement field u for the 3D sphere can be expressed as u(x, t) = u(x_{φ=0}, t) · f(φ), where φ = 0 represents the 2D computational domain and f(φ) are trigonometric functions. Here, we describe the variational formalism for the full multipole source system and validate its implementation against normal-mode solutions for the solid sphere. The global mesh includes several conforming coarsening levels to minimize grid-spacing variations. In an effort of algorithmic optimization, the discretization is acquired on the basis of matrix
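
    The azimuthal separation stated above can be made explicit. In standard axisymmetric spectral-element formulations the trigonometric factors are restricted to low azimuthal orders m = 0, 1, 2 (monopole, dipole, quadrupole), which is what yields the small set of independent 2D problems; the notation in the LaTeX sketch below is assumed rather than taken from the paper.

        % Assumed notation: cylindrical coordinates (s, phi, z); not verbatim from the paper.
        \mathbf{u}(s,\phi,z,t) \;=\; \sum_{m=0}^{2} \mathbf{u}_m(s,z,t)\, f_m(\phi),
        \qquad f_m(\phi) \in \{\cos m\phi,\ \sin m\phi\},

    with m = 0 covering the monopole moment-tensor terms and vertical forces, m = 1 the dipole terms and horizontal forces, and m = 2 the quadrupole terms.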

  2. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of a gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust and comparing the result with theory; the present simulations are also compared with other CFD gust simulations. The paper additionally serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
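
    The one-minus-cosine gust referred to above is a standard discrete-gust shape, and it is straightforward to generate as a forcing profile. The sketch below is a generic implementation with hypothetical amplitude and gust-gradient values; it is not code taken from FUN3D.

        # Standard "1-cos" discrete gust profile (illustrative parameters).
        import numpy as np

        def one_minus_cosine_gust(s, u_ds, gradient_h):
            """Gust velocity over penetration distance s (same units as gradient_h)."""
            w = 0.5 * u_ds * (1.0 - np.cos(np.pi * s / gradient_h))
            return np.where((s >= 0.0) & (s <= 2.0 * gradient_h), w, 0.0)

        s = np.linspace(0.0, 300.0, 601)                     # hypothetical penetration distance
        gust = one_minus_cosine_gust(s, u_ds=30.0, gradient_h=100.0)
        print(gust.max())                                    # peak equals u_ds at s = gradient_h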

  3. 2D µ-Particle Image Velocimetry and Computational Fluid Dynamics Study Within a 3D Porous Scaffold.

    PubMed

    Campos Marin, A; Grossi, T; Bianchi, E; Dubini, G; Lacroix, D

    2017-05-01

    Transport properties of 3D scaffolds under fluid flow are critical for tissue development. Computational fluid dynamics (CFD) models can resolve 3D flows and nutrient concentrations in bioreactors at the scaffold-pore scale with high resolution. However, CFD models are often formulated with assumptions and simplifications, so μ-particle image velocimetry (μPIV) measurements should be performed to improve the reliability and predictive power of such models. Nevertheless, measuring fluid flow velocities within 3D scaffolds is challenging. The aim of this study was to develop a μPIV approach that allows the extraction of velocity fields from a 3D additive-manufactured scaffold using a conventional 2D μPIV system. The μ-computed tomography scaffold geometry was included in a CFD model in which perfusion conditions were simulated. Good agreement was found between the measured velocity profiles and the computational results. Maximum velocities were found at the centre of the pore with both techniques, with a difference of 12%, which was expected given the accuracy of the μPIV system. However, significant differences in velocity magnitude were found near the scaffold substrate, owing to scaffold brightness affecting the μPIV measurements. As a result, the limitations of the μPIV system permit only a partial validation of the CFD model. Nevertheless, the combination of the two techniques allowed a detailed description of the velocity maps within a 3D scaffold, which is crucial for determining optimal cell and nutrient transport properties.
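
    The measurement principle behind PIV, estimating the displacement of an interrogation window between two frames by cross-correlation, can be illustrated with a synthetic example. The sketch below uses scikit-image's phase correlation on artificial data and does not represent the commercial μPIV system or the CFD setup used in the study.

        # Cross-correlation displacement estimate for one interrogation window (illustrative only).
        import numpy as np
        from skimage.registration import phase_cross_correlation

        rng = np.random.default_rng(0)
        frame_a = rng.random((64, 64))                          # synthetic particle image
        frame_b = np.roll(frame_a, shift=(3, 1), axis=(0, 1))   # same window, shifted 3 px / 1 px

        shift, error, _ = phase_cross_correlation(frame_a, frame_b, upsample_factor=10)
        print(shift)   # approximately [-3., -1.]: shift registering frame_b back onto frame_a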

  4. Integrating Mobile Robotics and Vision with Undergraduate Computer Science

    ERIC Educational Resources Information Center

    Cielniak, G.; Bellotto, N.; Duckett, T.

    2013-01-01

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of…

  6. CS651 Computer Systems Security Foundations 3d Imagination Cyber Security Management Plan

    SciTech Connect

    Nielsen, Roy S.

    2015-03-02

    3d Imagination is a new company that bases its business on selling and improving 3d open-source-related hardware. The devices they sell include 3d imagers, 3d printers, pick-and-place machines, and laser etchers. They have a fast company intranet for ease in sharing, storing, and printing large, complex 3d designs. Their employees require a variety of operating systems, including Windows, Mac, and several Linux distributions, both for running business services and for design and test machines. There are a wide variety of private networks for testing transfer rates to and from the 3d devices without interference with other network traffic. They hold video conferences with customers and other designers. One of their machines is based on the project found at delta.firepick.org (Krassenstein, 2014; Biggs, 2014), which, in future, will perform most of those functions. Their devices all include embedded systems that may have full-blown operating systems. Most of their systems are designed to have swappable parts, so when a new technology is born it can be quickly adopted by people with 3d Imagination hardware. The company is producing a fair number of systems and components; however, to get the funding needed to mass-produce quality parts, they are preparing for an IPO. They would like a cyber-security audit performed so they can give their investors confidence that they are protecting their data, customers' information, and printers in a proactive manner.

  7. An automatic time lapse camera setup for multi-vision 3D-reconstruction of morphological changes

    NASA Astrophysics Data System (ADS)

    Kaiser, Andreas; Neugirg, Fabian; Vláčilová, Markéta; Haas, Florian; Schmidt, Jürgen

    2015-04-01

    In the course of a five-year monitoring campaign on an Alpine slope in the Lainbach catchment, Southern Germany, high erosion rates were documented by terrestrial laser scanners (TLS) and unmanned airborne vehicles (UAV). As a result of different denudation processes, erosion rates differ between summer and winter periods; the latter became evident after comparing both TLS-measured time spans. However, process differentiation and the contribution of individual processes to the overall denudation remained challenging due to the discontinuous data collection every few weeks. In order to record these erosion processes, an array of four automatically triggered cameras was installed, capturing frames at ten-minute intervals as long as there is daylight. This work in progress aims to produce long-term time series of morphodynamic changes in an active catchment by applying multi-vision structure-from-motion algorithms to the set of four cameras. Geomorphic processes caused by particular weather phenomena can thus be interpreted in combination with climatic data acquired right next to the slope. Preliminary model calculations from the chosen perspectives produced adequate results, with point counts of around 5.5 million for the 120 m² slope. The point density proved to be dependent on the weather conditions, so foggy and dull images will be excluded. The approach will be validated by comparing the time-lapse point clouds with the TLS scans and UAV surveys as the monitoring continues.
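
    The multi-vision structure-from-motion processing rests on matching image features between the fixed cameras. The sketch below shows that first matching step with OpenCV ORB features; the file names are placeholders and the snippet is only a generic illustration, not the authors' processing chain.

        # Feature matching between two of the fixed cameras (placeholder file names).
        import cv2

        img1 = cv2.imread("camera1_1200.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
        img2 = cv2.imread("camera2_1200.jpg", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=4000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        print(f"{len(matches)} tentative correspondences between the two views")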

  8. Programmer's Guide for Subroutine PRNT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.

    ERIC Educational Resources Information Center

    Gales, Larry

    These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. PRNT3D is a subroutine package which generates a variety of printed plot displays. The displays…

  9. DEVELOPMENT OF 3-D COMPUTER MODELS OF HUMAN LUNG MORPHOLOGY FOR IMPROVED RISK ASSESSMENT OF INHALED PARTICULATE MATTER

    EPA Science Inventory

    DEVELOPMENT OF 3-D COMPUTER MODELS OF HUMAN LUNG MORPHOLOGY FOR IMPROVED RISK ASSESSMENT OF INHALED PARTICULATE MATTER

    Jeffry D. Schroeter, Curriculum in Toxicology, University of North Carolina, Chapel Hill, NC 27599; Ted B. Martonen, ETD, NHEERL, USEPA, RTP, NC 27711; Do...

  11. User's Guide for Subroutine PLOT3D. Physical Processes in Terrestrial and Aquatic Ecosystems, Computer Programs and Graphics Capabilities.